nanog mailing list archives

Re: NANOG Digest, Vol 94, Issue 4


From: Arthur Liew <arthurliew80 () gmail com>
Date: Fri, 6 Nov 2015 20:57:16 +0800

Hi Brandon,

Does Border6 work for inbound traffic engineering too?

We were exploring inbound traffic engineering a while ago and attended a
Huawei RR+ demo. The idea is great, but it currently only supports NetFlow
from their own routers.

We are still looking for options. It would be great if you could share more
about whether Border6 supports it.

Rgds
Arthur Liew
(CCIE#38181)

On Fri, Nov 6, 2015 at 8:00 PM, <nanog-request () nanog org> wrote:

Send NANOG mailing list submissions to
        nanog () nanog org

To subscribe or unsubscribe via the World Wide Web, visit
        http://mailman.nanog.org/mailman/listinfo/nanog
or, via email, send a message with subject or body 'help' to
        nanog-request () nanog org

You can reach the person managing the list at
        nanog-owner () nanog org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of NANOG digest..."


Today's Topics:

   1. Re: Internap route optimization (chip)
   2. Re: Internap route optimization (Mike Hammett)
   3. Looking for a comcast NOC and/or peering contact (Eric Sieg)
   4. Re: Internap route optimization (Sebastian Spies)
   5. Re: Internap route optimization (Fred Hollis)
   6. RE: Internap route optimization (Eric Van Tol)
   7. Re: AT&T Wholesale (Alex Forster)
   8. Long-haul 100Mbps EPL circuit throughput issue (Eric Dugas)
   9. Re: Long-haul 100Mbps EPL circuit throughput issue (alvin nanog)
  10. Re: Long-haul 100Mbps EPL circuit throughput issue (Bob Evans)
  11. Re: Long-haul 100Mbps EPL circuit throughput issue (Pablo Lucena)
  12. Re: Long-haul 100Mbps EPL circuit throughput issue (Pablo Lucena)
  13. Re: Long-haul 100Mbps EPL circuit throughput issue
      (Theodore Baschak)
  14. Re: Long-haul 100Mbps EPL circuit throughput issue (Greg Foletta)
  15. Re: Long-haul 100Mbps EPL circuit throughput issue
      (William Herrin)
  16. Re: Long-haul 100Mbps EPL circuit throughput issue (Pablo Lucena)
  17. Re: Internap route optimization (Brandon Wade)
  18. Youtube CDN unreachable over IPv6 (Seth Mos)


----------------------------------------------------------------------

Message: 1
Date: Thu, 5 Nov 2015 07:18:08 -0500
From: chip <chip.gwyn () gmail com>
To: Christopher Morrow <morrowc.lists () gmail com>
Cc: Fred Hollis <fred () web2objects com>, nanog list <nanog () nanog org>
Subject: Re: Internap route optimization
Message-ID:
        <
CABGzhdu8t8HW8RC4Tk+XWtrEtJZZuia_DcN-A2E_7TUv-PBGiQ () mail gmail com>
Content-Type: text/plain; charset=UTF-8

Just to be clear, Internap's solution doesn't use "more specifics" to steer
traffic.  The mechanisms in place to protect yourself from normal route
leaking should apply just the same.

--chip


On Thu, Nov 5, 2015 at 5:01 AM, Christopher Morrow <
morrowc.lists () gmail com>
wrote:

Also, please, if you use one of this sort of device, filter your
prefixes toward your customers/peers/transits... Do not be the next
person to leak their internap-box-routes to the world, m'kay? :)

On Thu, Nov 5, 2015 at 8:53 PM, Fred Hollis <fred () web2objects com>
wrote:
Hi,

No particular experience with Internap's optimization... however, I wouldn't
be so sure about using it within our networks, because you always have the
conflict of this not being their core business, as they want to sell their
optimized IP transit.

However, some time ago we tried Border6 in an evaluation and then finally
put it into production. Not only is the optimization nice, but the reporting
is so extremely detailed that it is very transparent where a transit has
congestion issues and which prefix is routed (in and out) through which
upstream.

For sure, traffic engineering/optimization is not a trivial task; it requires
deep thinking and an understanding of the whole BGP and routing picture.


On 05.11.2015 at 09:03 Paras wrote:

Does anyone know or have any experience with Internap's route
optimization? Is it any good?

I've heard of competing solutions as well, such as the one provided by
Noction.

Thanks for your input,
Paras






--
Just my $.02, your mileage may vary,  batteries not included, etc....


------------------------------

Message: 2
Date: Thu, 5 Nov 2015 07:21:05 -0600 (CST)
From: Mike Hammett <nanog () ics-il net>
Cc: nanog () nanog org
Subject: Re: Internap route optimization
Message-ID:
        <962573932.10921.1446729692705.JavaMail.mhammett@ThunderFuck>
Content-Type: text/plain; charset=utf-8

Keep in mind that most do not optimize inbound traffic, only outbound.




-----
Mike Hammett
Intelligent Computing Solutions
http://www.ics-il.com

----- Original Message -----

From: "Paras" <paras () protrafsolutions com>
To: nanog () nanog org
Sent: Thursday, November 5, 2015 2:03:41 AM
Subject: Internap route optimization

Does anyone know or have any experience with Internap's route
optimization? Is it any good?

I've heard of competing solutions as well, such as the one provided by
Noction.

Thanks for your input,
Paras




------------------------------

Message: 3
Date: Thu, 5 Nov 2015 08:23:03 -0500
From: Eric Sieg <eric.sieg () gmail com>
To: nanog () nanog org
Subject: Looking for a comcast NOC and/or peering contact
Message-ID:
        <
CAEcHXr6EXg2M2uc-JkuSG3EjTPU4nra-sRiUhAR0fDyKHLUKdg () mail gmail com>
Content-Type: text/plain; charset=UTF-8

If you could contact me off-list, it would be greatly appreciated!

-Eric


------------------------------

Message: 4
Date: Thu, 5 Nov 2015 14:56:10 +0100
From: Sebastian Spies <s+Mailinglisten.nanog () sloc de>
To: nanog () nanog org
Subject: Re: Internap route optimization
Message-ID: <563B5FFA.8020409 () sloc de>
Content-Type: text/plain; charset=utf-8

Hey Mike,

do you know of route optimizers that actually optimize inbound traffic?
We, at datapath.io, are currently working on this and could not find
another one that does it.

Best,
Sebastian

Am 05.11.2015 um 14:21 schrieb Mike Hammett:
Keep in mind that most do not optimize inbound traffic, only outbound.




-----
Mike Hammett
Intelligent Computing Solutions
http://www.ics-il.com

----- Original Message -----

From: "Paras" <paras () protrafsolutions com>
To: nanog () nanog org
Sent: Thursday, November 5, 2015 2:03:41 AM
Subject: Internap route optimization

Does anyone know or have any experience with Internap's route
optimization? Is it any good?

I've heard of competing solutions as well, such as the one provided by
Noction.

Thanks for your input,
Paras




------------------------------

Message: 5
Date: Thu, 5 Nov 2015 15:00:53 +0100
From: Fred Hollis <fred () web2objects com>
To: nanog () nanog org
Subject: Re: Internap route optimization
Message-ID: <563B6115.808 () web2objects com>
Content-Type: text/plain; charset=utf-8; format=flowed

Border6 offers such an option based on prepending and BGP communities.
But honestly, I haven't tested that feature yet. And I'm not sure how
much sense it makes, since it probably requires quite a lot of global BGP
updates... making routers even busier than they already are.

On 05.11.2015 at 14:56 Sebastian Spies wrote:
Hey Mike,

do you know route optimizers that actually do optimize inbound traffic?
We, at datapath.io, are currently working on this and could not find
another one that does it.

Best,
Sebastian

Am 05.11.2015 um 14:21 schrieb Mike Hammett:
Keep in mind that most do not optimize inbound traffic, only outbound.




-----
Mike Hammett
Intelligent Computing Solutions
http://www.ics-il.com

----- Original Message -----

From: "Paras" <paras () protrafsolutions com>
To: nanog () nanog org
Sent: Thursday, November 5, 2015 2:03:41 AM
Subject: Internap route optimization

Does anyone know or have any experience with Internap's route
optimization? Is it any good?

I've heard of competing solutions as well, such as the one provided by
Noction.

Thanks for your input,
Paras




------------------------------

Message: 6
Date: Thu, 5 Nov 2015 06:17:33 -0500
From: Eric Van Tol <eric () atlantech net>
To: NANOG <nanog () nanog org>
Subject: RE: Internap route optimization
Message-ID:
        <2C05E949E19A9146AF7BDF9D44085B86720A812942@exchange.aoihq.local>
Content-Type: text/plain; charset="utf-8"

TL;DR: Not worth it unless you have only a few transit providers and are a
content-heavy network with little inbound traffic.

We used the Internap FCP for a long time (10 or so years). In general, we
were satisfied with it, but honestly, after not having it in our network
for the past year and a half, we really don't notice a difference. We
primarily purchased it to keep transit costs down, but as we kept boosting
our minimums with providers, it became less and less about transit costs
and more about performance.

Boxes like these really work best if your network is a content-heavy
network (more outbound than inbound). Sure, it will route around poorly
performing paths, but IMO it's not worth the money and yearly maintenance
fees just for this. I always said that it must be doing a good job since we
never got complaints about packet loss in an upstream network, but now that
the device is gone, we still don't get complaints about packet loss in an
upstream's network. :-/

The biggest problem that we found was that it just was not actively
developed (at the time, not sure about now). New software features were
non-existent for years. Bugs were not fixed in a timely manner. Given what
we were paying in yearly maintenance fees, it just wasn't worth it to keep
around. It also wasn't scalable as we kept adding more transit interfaces,
given that there was a fixed number of capture ports. Adding non-transit
peering into the mix was also complicated and messed with the route
decision algorithms. Maybe things have changed.

As far as technicals, it seemed to work fine. One of the only really
annoying things about it was remote users who thought that a UDP packet
hitting their firewall from its automatic traceroute mechanism was a 'DDoS',
and threats of lawyers/the wrath of god almighty would come down upon us
for sending unauthorized packets to their precious and delicate network.
You would definitely also want to make sure that you filter announcements
so you don't accidentally start sending longer paths to your upstreams or
customer peers, but if you run BGP, you already do that, amirite?!

-evt

-----Original Message-----
From: NANOG [mailto:nanog-bounces () nanog org] On Behalf Of Paras
Sent: Thursday, November 05, 2015 3:04 AM
To: nanog () nanog org
Subject: Internap route optimization

Does anyone know or have any experience with Internap's route
optimization? Is it any good?

I've heard of competing solutions as well, such as the one provided by
Noction.

Thanks for your input,
Paras


------------------------------

Message: 7
Date: Thu, 5 Nov 2015 02:38:44 +0000
From: Alex Forster <alex () alexforster com>
To: "nanog () nanog org" <nanog () nanog org>
Subject: Re: AT&T Wholesale
Message-ID: <D2602A95.2DAA%alex () alexforster com>
Content-Type: text/plain; charset="iso-8859-1"

Actually, I'd appreciate pointers to a good rep with AT&T or Verizon for
Layer 2 services as well - no preference for which region of the country.
Thanks to anyone who can help!

Alex Forster



On 11/4/15, 12:53 PM, "NANOG on behalf of Sam Norris"
<nanog-bounces () nanog org on behalf of Sam () SanDiegoBroadband com> wrote:

Hey everyone,

Can someone send me privately the contact info for an AT&T Wholesale rep
for
Metro E / VPLS / Layer 2 stuff here in the SouthWest region?  Their
website is
not very informative on how to make any contact with the wholesale group.

Thx,
Sam




------------------------------

Message: 8
Date: Thu, 5 Nov 2015 16:48:51 -0500
From: Eric Dugas <edugas () unknowndevice ca>
To: nanog () nanog org
Subject: Long-haul 100Mbps EPL circuit throughput issue
Message-ID:
        <CALKrK4na=wKSi=
vf4EgPetoXJxaHTxfGt8HXYr6CE+xPk1Vy4g () mail gmail com>
Content-Type: text/plain; charset=UTF-8

Hello NANOG,

We've been dealing with an interesting throughput issue with one of our
carriers. Specs and topology:

100Mbps EPL, fiber from a national carrier. We do MPLS to the CPE, providing
a VRF circuit to our customer back to our data center through our MPLS
network. The circuit has 75 ms of latency since it's around 5000 km.

Linux test machine in customer's VRF <-> SRX100 <-> Carrier CPE (Cisco
2960G) <-> Carrier's MPLS network <-> NNI - MX80 <-> Our MPLS network <->
Terminating edge - MX80 <-> Distribution switch - EX3300 <-> Linux test
machine in customer's VRF

We can fill the link with UDP traffic with iperf, but with TCP we can reach
80-90% and then the traffic drops to 50% and slowly increases back up to 90%.
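
Illustrative only - roughly the iperf2 commands for this kind of UDP vs. TCP
comparison ("server" stands in for the far-end test machine, not our actual
setup):

  # sketch, not the exact commands used
  iperf -s                          # on the far-end test machine
  iperf -c server -u -b 100M -t 30  # UDP at line rate: fills the link
  iperf -c server -t 30             # TCP, default window: throughput sags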

Has anyone dealt with this kind of problem in the past? We've tested by
forcing ports to 100-FD at both ends, policing the circuit on our side,
and calling the carrier and escalating to L2/L3 support. They tried to
police the circuit too, but as far as I know they didn't modify anything else.
I've told our support to make them look for underrun errors on their Cisco
switch, and they can see some. They're pretty much in the same boat as us
and not sure where to look.

Thanks
Eric


------------------------------

Message: 9
Date: Thu, 5 Nov 2015 15:19:12 -0800
From: alvin nanog <nanogml () Mail DDoS-Mitigator net>
To: Eric Dugas <edugas () unknowndevice ca>
Cc: nanog () nanog org
Subject: Re: Long-haul 100Mbps EPL circuit throughput issue
Message-ID: <20151105231912.GA17090 () Mail DDoS-Mitigator net>
Content-Type: text/plain; charset=us-ascii


hi eric

On 11/05/15 at 04:48pm, Eric Dugas wrote:
...
Linux test machine in customer's VRF <-> SRX100 <-> Carrier CPE (Cisco
2960G) <-> Carrier's MPLS network <-> NNI - MX80 <-> Our MPLS network <->
Terminating edge - MX80 <-> Distribution switch - EX3300 <-> Linux test
machine in customer's VRF

We can full the link in UDP traffic with iperf but with TCP, we can reach
80-90% and then the traffic drops to 50% and slowly increase up to 90%.

if i was involved with these tests, i'd start looking for "not enough tcp
send and tcp receive buffers"

for flooding at 100Mbit/s, you'd need about 12MB buffers ...

udp does NOT care too much about dropped data due to the buffers,
but tcp cares about "not enough buffers" .. somebody resend packet#
1357902456 :-)

at least double or triple the buffers needed to compensate for all kinds of
network whackyness: data in transit, misconfigured hardware-in-the-path,
misconfigured iperfs, misconfigured kernels, interrupt handling, etc, etc
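
for example, on the linux test boxes (standard linux sysctls; the 16MB
values below are illustrative, not a recommendation):

  # check current tcp buffer limits (min / default / max, in bytes)
  sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
  sysctl net.core.rmem_max net.core.wmem_max

  # example maximums (illustrative) so tcp autotuning can grow past the BDP
  sysctl -w net.core.rmem_max=16777216
  sysctl -w net.core.wmem_max=16777216
  sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
  sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"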

- how many "iperf flows" are you also running ??
        - running dozen's or 100's of them does affect thruput too

- does the same thing happen with socat ??

- if iperf and socat agree with network thruput, it's the hw somewhere

- slowly increasing thruput doesn't make sense to me ... it sounds like
something is cacheing some of the data

magic pixie dust
alvin

Any one have dealt with this kind of problem in the past? We've tested by
forcing ports to 100-FD at both ends, policing the circuit on our side,
called the carrier and escalated to L2/L3 support. They tried to also
police the circuit but as far as I know, they didn't modify anything
else.
I've told our support to make them look for underrun errors on their
Cisco
switch and they can see some. They're pretty much in the same boat as us
and they're not sure where to look at.



------------------------------

Message: 10
Date: Thu, 5 Nov 2015 15:31:39 -0800
From: "Bob Evans" <bob () FiberInternetCenter com>
To: "Eric Dugas" <edugas () unknowndevice ca>
Cc: nanog () nanog org
Subject: Re: Long-haul 100Mbps EPL circuit throughput issue
Message-ID: <2af1f897aece8062f5744396fed1559a.squirrel@66.201.44.180>
Content-Type: text/plain;charset=iso-8859-1

Eric,

I have seen that happen.

First, double-check that the gear is truly full duplex... it seems like it may
claim it is and you just discovered it is not. That's always been an issue
with manufacturers claiming they are full duplex, and on short distances
it's not so noticeable.

Try to run iperf in both directions at the same time and it becomes obvious.

Thank You
Bob Evans
CTO




Hello NANOG,

We've been dealing with an interesting throughput issue with one of our
carrier. Specs and topology:

100Mbps EPL, fiber from a national carrier. We do MPLS to the CPE
providing
a VRF circuit to our customer back to our data center through our MPLS
network. Circuit has 75 ms of latency since it's around 5000km.

Linux test machine in customer's VRF <-> SRX100 <-> Carrier CPE (Cisco
2960G) <-> Carrier's MPLS network <-> NNI - MX80 <-> Our MPLS network <->
Terminating edge - MX80 <-> Distribution switch - EX3300 <-> Linux test
machine in customer's VRF

We can full the link in UDP traffic with iperf but with TCP, we can reach
80-90% and then the traffic drops to 50% and slowly increase up to 90%.

Any one have dealt with this kind of problem in the past? We've tested by
forcing ports to 100-FD at both ends, policing the circuit on our side,
called the carrier and escalated to L2/L3 support. They tried to also
police the circuit but as far as I know, they didn't modify anything
else.
I've told our support to make them look for underrun errors on their
Cisco
switch and they can see some. They're pretty much in the same boat as us
and they're not sure where to look at.

Thanks
Eric





------------------------------

Message: 11
Date: Thu, 5 Nov 2015 21:17:01 -0500
From: Pablo Lucena <plucena () coopergeneral com>
To: bob () fiberinternetcenter com
Cc: Eric Dugas <edugas () unknowndevice ca>, "NANOG Operators' Group"
        <nanog () nanog org>
Subject: Re: Long-haul 100Mbps EPL circuit throughput issue
Message-ID:
        <CAH+2GVLSUYMo-XGdQc1knYEq1hA13=NOQf4jP=
2fiNFMQoC5aw () mail gmail com>
Content-Type: text/plain; charset=UTF-8

With default window size of 64KB, and a delay of 75 msec, you should only
get around 7Mbps of throughput with TCP.

You would need a window size of about 1MB in order to fill up the 100 Mbps
link.

1/0.75 = 13.333 (how many RTTs in a second)
13.333 * 65535 * 8 = 6,990,225.24 (about 7Mbps)

You would need to increase the window to 1,048,560 bytes in order to get
around 100Mbps.

13.333 * 1,048,560 * 8 = 111,843,603.84 (about 100 Mbps)
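
As a quick sanity check, the same arithmetic as a shell one-liner
(illustrative only; assumes the 75 ms RTT and ideal, loss-free conditions):

  awk 'BEGIN { rtt=0.075; printf "64KB window: %.1f Mbps\n", 65535*8/rtt/1e6;
               printf "1MB window:  %.1f Mbps\n", 1048560*8/rtt/1e6 }'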


*Pablo Lucena*

*Cooper General Global Services*

*Network Administrator*

*Office: 305-418-4440 ext. 130*

*plucena () coopergeneral com <plucena () coopergeneral com>*

On Thu, Nov 5, 2015 at 6:31 PM, Bob Evans <bob () fiberinternetcenter com>
wrote:

Eric,

I have seen that happen.

1st double check that the gear is truly full duplex....seems like it may
claim it is and you just discovered it is not. That's always been an
issue
with manufactures claiming they are full duplex and on short distances
it's not so noticeable.

Try to perf in both directions at the same time and it become obvious.

Thank You
Bob Evans
CTO




Hello NANOG,

We've been dealing with an interesting throughput issue with one of our
carrier. Specs and topology:

100Mbps EPL, fiber from a national carrier. We do MPLS to the CPE
providing
a VRF circuit to our customer back to our data center through our MPLS
network. Circuit has 75 ms of latency since it's around 5000km.

Linux test machine in customer's VRF <-> SRX100 <-> Carrier CPE (Cisco
2960G) <-> Carrier's MPLS network <-> NNI - MX80 <-> Our MPLS network
<->
Terminating edge - MX80 <-> Distribution switch - EX3300 <-> Linux test
machine in customer's VRF

We can full the link in UDP traffic with iperf but with TCP, we can
reach
80-90% and then the traffic drops to 50% and slowly increase up to 90%.

Any one have dealt with this kind of problem in the past? We've tested
by
forcing ports to 100-FD at both ends, policing the circuit on our side,
called the carrier and escalated to L2/L3 support. They tried to also
police the circuit but as far as I know, they didn't modify anything
else.
I've told our support to make them look for underrun errors on their
Cisco
switch and they can see some. They're pretty much in the same boat as
us
and they're not sure where to look at.

Thanks
Eric






------------------------------

Message: 12
Date: Thu, 5 Nov 2015 21:18:38 -0500
From: Pablo Lucena <plucena () coopergeneral com>
To: bob <bob () fiberinternetcenter com>
Cc: Eric Dugas <edugas () unknowndevice ca>, "NANOG Operators' Group"
        <nanog () nanog org>
Subject: Re: Long-haul 100Mbps EPL circuit throughput issue
Message-ID:
        <
CAH+2GV+8kO1LEiCWri2NYyB5GsUbsc02uSTmVjK5+_AQgcVQqQ () mail gmail com>
Content-Type: text/plain; charset=UTF-8

With default window size of 64KB, and a delay of 75 msec, you should only
get around 7Mbps of throughput with TCP.

You would need a window size of about 1MB in order to fill up the 100 Mbps
link.

1/0.75 = 13.333 (how many RTTs in a second)
13.333 * 65535 * 8 = 6,990,225.24 (about 7Mbps)

You would need to increase the window to 1,048,560 bytes in order to get
around 100Mbps.

13.333 * 1,048,560 * 8 = 111,843,603.84 (about 100 Mbps)





I realized I made a typo:

1/*0.075* = 13.333

not

1/0.75 = 13.333




------------------------------

Message: 13
Date: Thu, 5 Nov 2015 20:27:27 -0600
From: Theodore Baschak <theodore () ciscodude net>
To: NANOG Operators' Group <nanog () nanog org>
Subject: Re: Long-haul 100Mbps EPL circuit throughput issue
Message-ID: <4A6A5425-ADA1-49D6-89D5-DE189A53D8FD () ciscodude net>
Content-Type: text/plain;       charset=utf-8

On Nov 5, 2015, at 8:18 PM, Pablo Lucena <plucena () coopergeneral com>
wrote:

I realized I made a typo:


switch.ch has a nice bandwidth delay product calculator.
https://www.switch.ch/network/tools/tcp_throughput/

Punching in the link specs from the original post gives pretty much exactly
what you said, Pablo, including that it'd get ~6.999 megabits/sec with a
default 64k window.

BDP (100 Mbit/sec, 75.0 ms) = 0.94 MByte
required tcp buffer to reach 100 Mbps with RTT of 75.0 ms >= 915.5 KByte
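
Those numbers fall straight out of rate x RTT; a throwaway one-liner to
reproduce them (illustrative only):

  awk 'BEGIN { bdp = 100e6 * 0.075 / 8;
               printf "BDP: %.2f MByte (%.1f KByte)\n", bdp/1e6, bdp/1024 }'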

Theo




------------------------------

Message: 14
Date: Fri, 6 Nov 2015 10:35:13 +1100
From: Greg Foletta <greg () foletta org>
To: alvin nanog <nanogml () mail ddos-mitigator net>
Cc: Eric Dugas <edugas () unknowndevice ca>, nanog () nanog org
Subject: Re: Long-haul 100Mbps EPL circuit throughput issue
Message-ID:
        <CAN5PdK=sUjKMeRXkqGiEVCv5_Fi3kF3pr_gBYBB=-
40S2yjzZg () mail gmail com>
Content-Type: text/plain; charset=UTF-8

Along with the receive window/buffer needed for your particular
bandwidth-delay product, it appears you're also seeing TCP moving from
slow start to a congestion avoidance mechanism (Reno, Tahoe, CUBIC, etc.).
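
A quick way to watch this from the Linux test machine while the transfer
runs (illustrative; assumes iproute2 is installed and iperf2's default
port 5001):

  # which congestion control algorithm the sender is using
  sysctl net.ipv4.tcp_congestion_control

  # per-connection cwnd, ssthresh, rtt and retransmits, refreshed each second
  watch -n1 "ss -tin '( dport = :5001 )'"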

Greg Foletta
greg () foletta org


On 6 November 2015 at 10:19, alvin nanog <nanogml () mail ddos-mitigator net>
wrote:


hi eric

On 11/05/15 at 04:48pm, Eric Dugas wrote:
...
Linux test machine in customer's VRF <-> SRX100 <-> Carrier CPE (Cisco
2960G) <-> Carrier's MPLS network <-> NNI - MX80 <-> Our MPLS network
<->
Terminating edge - MX80 <-> Distribution switch - EX3300 <-> Linux test
machine in customer's VRF

We can full the link in UDP traffic with iperf but with TCP, we can
reach
80-90% and then the traffic drops to 50% and slowly increase up to 90%.

if i was involved with these tests, i'd start looking for "not enough tcp
send
and tcp receive buffers"

for flooding at 100Mbit/s, you'd need about 12MB buffers ...

udp does NOT care too much about dropped data due to the buffers,
but tcp cares about "not enough buffers" .. somebody resend packet#
1357902456 :-)

at least double or triple the buffers needed to compensate for all kinds
of
network whackyness:
data in transit, misconfigured hardware-in-the-path, misconfigured
iperfs,
misconfigured kernels, interrupt handing, etc, etc

- how many "iperf flows" are you also running ??
        - running dozen's or 100's of them does affect thruput too

- does the same thing happen with socat ??

- if iperf and socat agree with network thruput, it's the hw somewhere

- slowly increasing thruput doesn't make sense to me ... it sounds like
something is cacheing some of the data

magic pixie dust
alvin

Any one have dealt with this kind of problem in the past? We've tested
by
forcing ports to 100-FD at both ends, policing the circuit on our side,
called the carrier and escalated to L2/L3 support. They tried to also
police the circuit but as far as I know, they didn't modify anything
else.
I've told our support to make them look for underrun errors on their
Cisco
switch and they can see some. They're pretty much in the same boat as
us
and they're not sure where to look at.




------------------------------

Message: 15
Date: Thu, 5 Nov 2015 23:40:22 -0500
From: William Herrin <bill () herrin us>
To: Pablo Lucena <plucena () coopergeneral com>
Cc: "NANOG Operators' Group" <nanog () nanog org>
Subject: Re: Long-haul 100Mbps EPL circuit throughput issue
Message-ID:
        <
CAP-guGUycg9mcVKYxB_fXXVCLhNpA6unkCmoiODrFmn5i7aDOw () mail gmail com>
Content-Type: text/plain; charset=UTF-8

On Thu, Nov 5, 2015 at 9:17 PM, Pablo Lucena <plucena () coopergeneral com>
wrote:
With default window size of 64KB, and a delay of 75 msec, you should only
get around 7Mbps of throughput with TCP.

Hi Pablo,

Modern TCPs support and typically use window scaling (RFC 1323). You
may not notice it in packet dumps because the window scaling option is
negotiated once for the connection, not repeated in every packet.
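
For anyone following along, an easy way to confirm that on a Linux endpoint
(illustrative; the port filter assumes iperf2's default 5001):

  # window scaling is negotiated once, in the handshake, so check the sysctl...
  sysctl net.ipv4.tcp_window_scaling

  # ...or capture only the SYN/SYN-ACK and look for the "wscale" option
  tcpdump -ni any -c 2 'tcp[tcpflags] & tcp-syn != 0 and port 5001'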

Regards,
Bill Herrin


--
William Herrin ................ herrin () dirtside com  bill () herrin us
Owner, Dirtside Systems ......... Web: <http://www.dirtside.com/>


------------------------------

Message: 16
Date: Fri, 6 Nov 2015 00:17:26 -0500
From: Pablo Lucena <plucena () coopergeneral com>
To: William Herrin <bill () herrin us>
Cc: "NANOG Operators' Group" <nanog () nanog org>
Subject: Re: Long-haul 100Mbps EPL circuit throughput issue
Message-ID:
        <CAH+2GVLNx8-rxHe9_cHaAv2oVosf=45bh=
cP6NBesD90Z3EstQ () mail gmail com>
Content-Type: text/plain; charset=UTF-8



Modern TCPs support and typically use window scaling (RFC 1323). You
may not notice it in packet dumps because the window scaling option is
negotiated once for the connection, not repeated in every packet.


Absolutely. Most host OSes should support this by now. Some test utilities,
however, like iperf (at least the versions I've used), default to a 16-bit
window size.
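
For example, with iperf2 the window has to be raised explicitly on both
ends; the 1M figure below is just an illustration sized for this 75 ms /
100 Mbps path, and <server> is a placeholder:

  iperf -s -w 1M                 # server side
  iperf -c <server> -w 1M -t 30  # client side; ~1 MB covers the ~0.94 MB BDP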

The goal of my response was to point out that TCP, unlike UDP, relies on
windowing, which explains the discrepancy.

This is a good article outlining these details:

https://www.edge-cloud.net/2013/06/measuring-network-throughput/


------------------------------

Message: 17
Date: Fri, 6 Nov 2015 05:12:29 +0000 (UTC)
From: Brandon Wade <brandonwade () yahoo com>
To: "nanog () nanog org" <nanog () nanog org>
Subject: Re: Internap route optimization
Message-ID:
        <671102648.693176.1446786749702.JavaMail.yahoo () mail yahoo com>
Content-Type: text/plain; charset=UTF-8

Does anyone know or have any experience with Internap's route
optimization? Is it any good?

I've heard of competing solutions as well, such as the one
provided by Noction.

Thanks for your input,
Paras

We currently utilize the Border6 solution on our network and are very
happy with it. It optimizes our outbound traffic by injecting more
specifics into our border router. To ensure we don't re-advertise these
more specifics to our downstreams/peers, we tag those routes with a
community.

The Border6 solution optimizes traffic based upon performance and also
allows us to keep our traffic levels within our commits. It has saved us on
bursting costs, as well as technical support costs since it has virtually
eliminated packet loss complaints.

As far as price, the Border6 solution is by far more cost-effective versus
the quotes we have received from Internap and Noction. The technical
support staff have always gone above and beyond to tweak the solution as
needed as well. Overall we are very impressed and would recommend the
Border6 solution. It more than pays for itself with customer satisfaction
and not needing to staff someone around the clock to manually route around
packet loss, blackholed traffic, etc.

In fact, a few weeks ago our appliance died (for reasons not related to the
Border6 solution), and in the few days it took us to provision a replacement
box we did notice packet loss complaints come in. A pretty good indication
that it is doing its job as expected.

Feel free to contact me off list if you have any questions about the
Border6 solution.

Brandon Wade - AS53767
http://as53767.net




------------------------------

Message: 18
Date: Fri, 6 Nov 2015 08:59:33 +0100
From: Seth Mos <seth.mos () dds nl>
To: NANOG list <nanog () nanog org>
Subject: Youtube CDN unreachable over IPv6
Message-ID: <563C5DE5.60505 () dds nl>
Content-Type: text/plain; charset=utf-8

Dear Google,

It appears that one of the YouTube CDNs (in Europe, NL) is not
reachable over IPv6 from AS 20844. Can someone get back to us on this?
The company can't access any of the videos currently, although the
main page loads fine (over IPv6).

Kind regards,

Seth

telnet r6---sn-5hne6n76.googlevideo.com 443
Trying 2a00:1450:401c:4::b...
telnet: connect to address 2a00:1450:401c:4::b: Connection timed out
Trying 74.125.100.203...
Connected to r6.sn-5hne6n76.googlevideo.com (74.125.100.203).
Escape character is '^]'.
Connection closed by foreign host.

telnet www.youtube.com 443
Trying 2a00:1450:4013:c01::5d...
Connected to youtube-ui.l.google.com (2a00:1450:4013:c01::5d).
Escape character is '^]'.
Connection closed by foreign host.


End of NANOG Digest, Vol 94, Issue 4
************************************


