MPE/iX Networking Enhancements

                       

Abstract

 

The MPE/iX networking lab has worked on a number of enhancements and improvements to the networking stack.  Here, I would like to share some of the new enhancements and improvements made to MPE/iX networking.

 

The main enhancements and improvements are increases to some of the TCP/UDP limits, the implementation of a traceroute utility, and the resolution of a TCP performance issue.  The limit increases cover the total number of TCP connections and the total number of UDP sockets.  These enhancements will give our customers greater connectivity to their HP 3000 machines.  There have been many requests for a traceroute utility on the 3000-L newsgroup.  We have developed a traceroute utility which traces the route from one host to another and reports the round-trip delay between the source and each router along the way; it can also be used to identify paths that introduce large delays.  TCP performance on MPE/iX is another issue, and the solution to the receive-throughput problem is information I would like to share along with the topics mentioned above.

 

Increase in number of TCP connections

 

Currently, TCP on MPE/iX allows a maximum of 10,240 TCP connections.  We have now enhanced the TCP module to support up to 20,000 connections.  The version ID of TCP (NET_TCP_VERS in nmmaint,3) which supports this feature starts with 'B0605' (6.5 release).
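As an illustration, whether a given machine has the new limit can be determined by comparing its NET_TCP_VERS version ID against 'B0605'.  The helper below is a hypothetical sketch, not an MPE/iX tool; since version IDs are a letter followed by four digits, plain string comparison suffices:

```python
def supports_20000_connections(version_id: str) -> bool:
    """Return True if a NET_TCP_VERS version ID (e.g. 'B0605')
    is at least 'B0605', the first version with the 20,000 limit.
    IDs are one letter plus four digits, so lexicographic
    comparison gives the right ordering."""
    return version_id >= "B0605"

print(supports_20000_connections("B0605"))  # True
print(supports_20000_connections("A0530"))  # False
```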

 

Configuration through NMMGR

 

You need to configure the machine using NMMGR to allow it to go up to 20,000 connections.  The field "Maximum number of connections" can now be set as high as 20,000.  Before this enhancement the range was 0 to 10,240.

 

To reach the required NMMGR screen, do the following:

 

1. Go to [ Open Config ]

2. Go to [ NS          ]

3. Go to [ Unguided    ]

4. Go to [ Netxport    ]

5. Go to [ Gprot       ]

6. Go to [ Tcp         ]

 

After the reconfiguration, stop the network if it is already running and restart it.  TCP is then ready to accept up to 20,000 TCP connections.

 

Increase in number of UDP Sockets

Currently, UDP on MPE/iX allows a maximum of 4,096 sockets.  We have now enhanced the UDP module to support up to 10,000 sockets.  The version ID of UDP (NET_UDP in nmmaint,3) which supports this feature starts with 'B0605' (6.5 release).

 

Configuration through NMMGR

 

You need to configure the machine using NMMGR to allow it to go up to 10,000 sockets.  The field "Maximum number of UDP sockets" can now be set as high as 10,000.  Before this enhancement the range was 0 to 4,096.

To reach the required NMMGR screen, do the following:

 

1. Go to [ Open Config ]

2. Go to [ NS          ]

3. Go to [ Unguided    ]

4. Go to [ Netxport    ]

5. Go to [ Gprot       ]

6. Go to [ UDP         ]

 

After the reconfiguration, stop the network if it is already running and restart it.  UDP is then ready to accept up to 10,000 UDP sockets.

 

Traceroute Utility on MPE/iX

 

There have been many requests for the traceroute utility on the 3000-L
newsgroup.  The traceroute utility helps trace a route from one host to
another.  The utility also reports the round-trip delay between the
source and each router in between.  This can be used to identify paths
that introduce large delays.

 

Currently, the traceroute utility implemented on MPE/iX is undergoing
alpha testing.  Below is the output of a sample run of the traceroute
utility (tracert) on MPE/iX:

 

:run tracert.group.acct

 

Enter Host Name[Press Return to exit]:  pavan.india.hp.com

Resolving IP Address.....

Enter the number of hops[min=1,max=60]:

Defaulting to 25 hops

Traceroute to pavan.india.hp.com,hops=25

To TERMINATE Press CTRL-Y

#1 15.44.48.1  (atlagw2.atl.hp.com)  330ms 378ms 390ms

#2 15.44.48.1  (atlagw2.atl.hp.com) 401ms 448ms 459ms

#3 15.41.16.1  470ms 595ms 707ms

#4 15.24.240.2  (atlhgw2.cns.hp.com) 428ms 269ms 381ms

#5 15.88.32.1  (palhgw3.cns.hp.com) 585ms 711ms 826ms

#6 15.88.56.6  (palhgw6.cns.hp.com) 966ms 75ms 162ms

#7 15.111.31.2  (snghgw1.cns.hp.com) 453ms 769ms 294ms

#8 15.64.32.2  (snghgw2.cns.hp.com) 345ms 658ms 939ms

#9 15.10.32.1  (blrgw1.cns.hp.com) 794ms 817ms 902ms

#10 15.10.40.72  (pavan.india.hp.com)   796ms  828ms  793ms

END OF PROGRAM

 

The traceroute utility makes use of the TTL field of the IP header:
probes are sent with successively larger TTL values, and each router at
which the TTL expires returns an ICMP Time Exceeded message, revealing
its address.
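The TTL-stepping idea behind any traceroute can be sketched as follows.  This is a simplified, generic sketch, not the MPE/iX implementation; `send_probe` is an assumed stand-in for the actual probe (typically a UDP datagram to a high port, or an ICMP Echo, sent with the IP TTL option set to `ttl`):

```python
import time

def traceroute(dest, send_probe, max_hops=25, probes_per_hop=3):
    """Generic TTL-stepping loop.

    send_probe(dest, ttl) must send one probe with the given IP TTL
    and return the address that replied (an ICMP Time Exceeded
    sender, or the destination itself), or None on timeout.
    Returns a list of (ttl, [(hop_address, rtt_ms), ...]) entries.
    """
    route = []
    for ttl in range(1, max_hops + 1):
        replies = []
        for _ in range(probes_per_hop):
            start = time.monotonic()
            hop = send_probe(dest, ttl)
            rtt_ms = (time.monotonic() - start) * 1000.0
            replies.append((hop, rtt_ms))
        route.append((ttl, replies))
        # Stop once the destination itself has answered.
        if any(hop == dest for hop, _ in replies):
            break
    return route
```

Each TTL value yields one line of output like those in the sample run above: the hop number, the replying router's address, and three round-trip times.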

 

TCP Performance issue

 

It has been found, using performance measurement utilities such as
XPPERF and TTCP (a ported public-domain utility), that receive
throughput on MPE/iX is considerably lower than on other platforms
(e.g. HP-UX).  This problem was traced to the small default TCP receive
window size on MPE/iX, and it has now been corrected.
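The receive window a TCP connection advertises is bounded by the receive buffer behind the socket, which is why a small default buffer caps throughput.  For illustration only, on systems with a BSD-style sockets API (a general sketch, not the NetIPC interface used on MPE/iX), an application can widen its own window by enlarging that buffer:

```python
import socket

# Ask for a larger receive buffer before connecting; the advertised
# TCP receive window can then grow up to (roughly) this size.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)

# The OS may round or double the request; read back what was granted.
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(granted)
s.close()
```

The MPE/iX fix raises the system's default so that applications see the larger window without any such per-socket tuning.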

 

Attached is the XPPERF output, taken between two machines named DRY and

HALIFAX with and without the fix.

 

DRY to HALIFAX (without fix)

----------------------------

 

Enter protocol to test (TCP, X25, UDP) :t

Enter socket number (def=32000) ..........

Enter transmission type (Unidirectional/Bidirectional): u

Enter mode (Master/Slave)            : m

Enter remote node name               : dry

Changes to run-time defaults? (Y/N).. :

BEGIN TCP master execution.

 IPCCREATE ...

 IPCCONTROL (socket timeout) ...

 IPCDEST ...

 IPCCONNECT ...

 IPCSHUTDOWN (socket) ...

 IPCCONTROL (connection timeout) ...

 IPCRECV (connection) ...

 

Connection is now established.

 

Enter initial send size (def=1408) . : 100

Enter final send size (def=1408) . : 9100

Enter step size (def=1) ............ : 500

Enter # passes for each size (def=25): 2048

Hit RETURN when ready.

 

 IPCSEND (for send info ...)

 

Start RECV loop [        0]

 

Ending      Buffer size                            Elapsed

step time   (bytes)      Mbits/sec  KBytes/second  time

----------  -----------  ---------  -------------  -------

921005107          100       0.31          39.80        5

921005111          600       2.33         298.54        4

921005117         1100       3.42         437.85        5

921005124         1600       3.55         454.91        7

921005134         2100       3.27         417.95       10

921005145         2600       3.68         470.42       11

921005159         3100       3.44         440.69       14

921005175         3600       3.50         447.80       16

921005192         4100       3.75         480.00       17

921005212         4600       3.58         457.75       20

921005234         5100       3.60         461.37       22

921005257         5600       3.79         484.58       23

921005283         6100       3.79         485.62       25

921005311         6600       3.67         469.13       28

921005341         7100       3.81         487.26       29

921005373         7600       3.69         472.68       32

921005407         8100       3.70         474.14       34

921005444         8600       3.71         475.44       36

921005483         9100       3.63         464.39       39

 

DRY to HALIFAX (with fix)

-------------------------

 

Initial send size (def=1408) . : 100

Enter final send size (def=1408) . : 9100

Enter step size (def=1) ............ : 500

Enter # passes for each size (def=25): 2048

Hit RETURN when ready.

 

 IPCSEND (for send info ...)

 

 Start RECV loop [        0]

 

Ending      Buffer size                            Elapsed

step time   (bytes)      Mbits/sec  KBytes/second  time

----------  -----------  ---------  -------------  -------

 

921082548          100       0.31          39.80        5

921082553          600       1.87         238.83        5

921082557         1100       4.28         547.31        4

921082563         1600       4.15         530.73        6

921082570         2100       4.66         597.07        7

921082577         2600       5.78         739.23        7

921082585         3100       6.03         771.22        8

921082594         3600       6.22         796.09        9

921082604         4100       6.37         816.00       10

921082616         4600       5.96         762.92       12

921082628         5100       6.61         845.85       12

921082641         5600       6.70         857.33       13

921082655         6100       6.77         867.17       14

921082671         6600       6.41         820.97       16

921082687         7100       6.90         883.17       16

921082704         7600       6.95         889.75       17

921082723         8100       6.63         848.47       19

921082742         8600       7.04         900.84       19

921082763         9100       7.07         905.56       20

 

Please note the increase in throughput: it is almost double what it
was before the fix.
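As an aside on reading the tables, the throughput columns can be approximately reconstructed from the buffer size, the number of passes entered (2048), and the elapsed time.  The sketch below assumes binary units (2**20 bits per Mbit, 1024 bytes per KByte); since the elapsed column is rounded to whole seconds, the results only roughly match the table figures:

```python
def throughput(buf_bytes, passes, elapsed_s):
    """Approximate (Mbits/sec, KBytes/sec) for one XPPERF row,
    assuming binary units: 2**20 bits per Mbit, 1024 bytes/KByte."""
    total_bytes = buf_bytes * passes
    mbits = total_bytes * 8 / elapsed_s / 2**20
    kbytes = total_bytes / elapsed_s / 1024
    return round(mbits, 2), round(kbytes, 2)

# First row of either run: 100-byte buffers, 2048 passes, ~5 seconds.
print(throughput(100, 2048, 5))  # (0.31, 40.0) -- the table shows
                                 # 39.80 because 5 s is itself rounded
```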

 

 

 

Summary

 

Over the last year, the MPE/iX networking lab has worked on several enhancements which will benefit MPE/iX customers.  These include increasing the number of TCP connections and the number of UDP sockets, which allows greater connectivity to MPE/iX machines.  In addition to these two capacity-related enhancements, the traceroute utility was implemented on MPE/iX.  The lab has also worked on a TCP performance issue and resolved a problem related to TCP receive throughput.

 

   

©Copyright 1999 Interex. All rights reserved.