nanog mailing list archives

Re: [outages] Twelve99 / AWS usw2 significant loss


From: Andras Toth <diosbejgli () gmail com>
Date: Sat, 27 Jan 2024 11:43:45 +1100

Seems like the destination is in Hetzner; they could also raise it with
Twelve99, or prepend routes to use an alternate path.
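
For the prepend option, a minimal sketch of what that could look like,
assuming BIRD 2.x, a placeholder local ASN 65001, and hypothetical
protocol/filter names (none of these details are from the thread):

```
# Hypothetical BIRD 2.x export filter: prepend our own ASN twice on the
# session toward Twelve99, making this path less attractive so traffic
# shifts to another upstream.
filter prepend_twelve99 {
  bgp_path.prepend(65001);
  bgp_path.prepend(65001);
  accept;
}

protocol bgp twelve99 {
  # neighbor and local-address configuration omitted
  ipv4 { export filter prepend_twelve99; };
}
```

The equivalent on other platforms is an outbound route-map/policy with
"set as-path prepend" applied to the session toward the congested transit.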


On Fri, Jan 26, 2024 at 7:46 PM Phil Lavin via Outages <outages () outages org>
wrote:

Thanks, ytti. I have raised a case with AWS but I expect it to be as
unproductive as usual. With this type of issue, there is often a friendly
engineer keeping an eye on the NANOG list or IRC channel who can mitigate
it. I'm hoping that's the case this time, too.


On 26 Jan 2024, at 08:40, Saku Ytti <saku () ytti fi> wrote:

On Fri, 26 Jan 2024 at 10:23, Phil Lavin via NANOG <nanog () nanog org>
wrote:


88.99.88.67 to 216.147.3.209:
Host                                       Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. 10.88.10.254                            0.0%   176    0.2   0.1   0.1   0.3   0.1
 7. nug-b1-link.ip.twelve99.net             0.0%   176    3.3   3.5   3.1  24.1   1.6
 8. hbg-bb2-link.ip.twelve99.net           86.9%   175   18.9  18.9  18.7  19.2   0.1
 9. ldn-bb2-link.ip.twelve99.net           92.0%   175   30.5  30.6  30.4  30.8   0.1
10. nyk-bb1-link.ip.twelve99.net            4.6%   175   99.5  99.5  99.3 100.1   0.2
11. sjo-b23-link.ip.twelve99.net           56.3%   175  296.8 306.0 289.7 315.0   5.5
12. amazon-ic-366608.ip.twelve99-cust.net  80.5%   175  510.0 513.5 500.7 539.7   8.4

This implies the problem is not on this path: #10 is not experiencing
it, possibly because it happens to return its packets via another path.
That shows the problem had not yet occurred in this direction by #10;
since #8 and #9 did see it, they must have seen it in the other
direction.


44.236.47.236 to 178.63.26.145:
Host                                              Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. ip-10-96-50-153.us-west-2.compute.internal     0.0%   267    0.2   0.2   0.2   0.4   0.0
11. port-b3-link.ip.twelve99.net                   0.0%   267    5.8   5.9   5.6  11.8   0.5
12. palo-b24-link.ip.twelve99.net                  4.9%   267   21.1  21.5  21.0  58.4   3.1
13. sjo-b23-link.ip.twelve99.net                   0.0%   266   21.4  22.7  21.3  86.2   6.5
14. nyk-bb1-link.ip.twelve99.net                  58.1%   266  432.7 422.7 407.2 438.5   6.5
15. ldn-bb2-link.ip.twelve99.net                  98.1%   266  485.6 485.4 481.6 491.1   3.9
16. hbg-bb2-link.ip.twelve99.net                  92.5%   266  504.1 499.8 489.8 510.1   5.9
17. nug-b1-link.ip.twelve99.net                   55.5%   266  523.5 519.6 504.4 561.7   7.6
18. hetzner-ic-340780.ip.twelve99-cust.net        53.6%   266  524.4 519.2 506.0 545.5   6.9
19. core22.fsn1.hetzner.com                       70.2%   266  521.7 519.2 498.5 531.7   6.6
20. static.213-239-254-150.clients.your-server.de 33.2%   266  382.4 375.4 364.9 396.5   4.1
21. static.145.26.63.178.clients.your-server.de   62.0%   266  529.9 518.4 506.9 531.3   6.1

This suggests the congestion point is sjo to nyk, inside 1299, not in
AWS at all.

You could try fixing the source and destination ports (SPORT/DPORT) and
testing several SPORT values, to see whether the loss goes away with some
of them; that would tell you whether all LAG members are full or just
one.
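
The port-variation test above could be sketched as follows, assuming an
mtr build with UDP mode and the --localport option (the port values are
arbitrary unprivileged ports; this only builds the command lines, since
each probe run needs network access and suitable privileges):

```shell
# Each distinct UDP source port changes the 5-tuple, so per-flow hashing
# may steer the probes onto a different LAG member on the sjo-nyk link.
target=216.147.3.209        # destination from the first trace above
for sport in 33001 33002 33003 33004; do
  printf 'mtr --report --report-cycles 100 --udp --localport %s %s\n' \
    "$sport" "$target"
done
```

If the loss disappears for some source ports but not others, a single
full LAG member is the likely culprit; loss across all ports points at
the whole bundle being congested.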


At any rate, this seems like business as usual: sometimes the internet is
very lossy. You should contact your service provider, which I guess is
AWS here, so they can contact their service provider, 1299.

--
 ++ytti

_______________________________________________
Outages mailing list
Outages () outages org
https://puck.nether.net/mailman/listinfo/outages

