nanog mailing list archives

Re: New minimum speed for US broadband connections


From: Mike Hammett <nanog () ics-il net>
Date: Tue, 1 Jun 2021 16:56:39 -0500 (CDT)

For something "future-proof" you have to run fiber. Rural fiber runs $5 - $10/ft to build. That's $26k - $52k per mile. 
Most rural roads around here have 2 - 3 houses per mile, and I'm sure the more rural you go, the fewer you have. That's one 
hell of an install cost per home passed. 
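
A minimal back-of-the-envelope sketch of that per-home-passed math in Python; the $5 - $10/ft and 2 - 3 homes/mile figures are the rough estimates from this post, not survey data:

    # Rural fiber cost per home passed, using this post's rough estimates.
    FEET_PER_MILE = 5280

    def cost_per_home_passed(cost_per_foot, homes_per_mile):
        """Construction cost for a mile of plant, split across the homes it passes."""
        return cost_per_foot * FEET_PER_MILE / homes_per_mile

    best = cost_per_home_passed(5, 3)    # ~$8,800 per home
    worst = cost_per_home_passed(10, 2)  # ~$26,400 per home
    print(f"${best:,.0f} - ${worst:,.0f} per home passed")

Which lands roughly in line with the $10k - $20k per-home figure used later in this message.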


Failing that, you go to WISPs and LEO. In the ultra-rural areas, LEO is the only thing that makes sense. In the middle 
areas, WISPs are and will be the only viable option. There's limited available spectrum. It didn't help when the FCC 
neutered the WISP-friendly auctions (well, really, friendly to any organization that isn't a mobile wireless provider) to make them 
friendlier to the mobile guys. Then they made all of the other mid-band auctions effectively exclusive to mobile 
wireless players. That shows how much they cared about making broadband available to people. 

WISPs *CAN* deliver 100/20 (or better) service in LOS (line of sight) environments. In foliage-heavy environments, 
WISPs won't fare as well, but then neither will LEO. The only things that can get those kinds of speeds into foliage-heavy 
environments are geostationary satellite (with appropriate use of chainsaws) and cables of some kind. Obviously, current 
consumer geostationary service can't do those kinds of speeds, but that's down to oversubscription. 

So for a WISP or for LEO, you're looking at roughly $500/home in costs. (Starlink's raw costs are much higher, but that's 
what they charge per sub. WISPs don't charge that much per sub for an install, but it's likely more 
representative of the all-in cost of a high-capacity, scalable system.) Compare that to the $10k - $20k per home for 
rural fiber. 


Requiring a 100 meg upload really changes the dynamics of what WISPs can deliver, leaving fiber as the only option at a cost 
increase of 20x - 40x... for something that isn't needed. 
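
A minimal sketch of where the 20x - 40x figure comes from, using this post's per-home estimates:

    # Cost ratio of rural fiber to fixed wireless, per this post's estimates.
    wireless_per_home = 500  # WISP/LEO all-in estimate
    for fiber_per_home in (10_000, 20_000):
        print(f"${fiber_per_home:,} / ${wireless_per_home:,} = "
              f"{fiber_per_home // wireless_per_home}x")
    # -> 20x and 40x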

My performance assumptions are based on households, as that's how people exist. Recall the usage information I posted earlier 
about the number of people and the usage habits on my Internet connection. I just counted 75 devices in my regular 
"Internet" VLAN. That excludes anyone on the guest WiFi and excludes the cameras VLAN, and I'll admit that I got lazy 
on my IoT VLAN, so most of those devices are on my regular Internet VLAN too. I'm not trying to brag, as I know some of 
you will have more. My point is that I'm not out of touch with how many devices people have. 

Game consoles? Not much there for upload. Yes, they'll occasionally have big downloads (and they're getting bigger every day). 
Same with OS updates. They happen, but not every day. 
Cloud backups? Once the initial seed is done, cloud backups are pretty small. All of my mobile devices back up to 
multiple cloud services (I think three). 

I think the IEEE has taken *FAR* too long to push transmit synchronization into the WiFi standard. 802.11ax has a little bit 
of it, but it's not a requirement. I envision a world where the ISP pushes timing out via something *like* IEEE 1588 that 
then causes all of its subscribers' APs to transmit at the same time, greatly reducing the potential for interference. That's 
the only way to scale outdoor fixed wireless. Why can't WiFi do that too? 
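
To make the idea concrete, here is a toy sketch of that kind of synchronized transmit scheduling. It is purely illustrative: the frame length and downlink/uplink split are made-up numbers, and nothing like this is part of any current WiFi standard. The point is only that APs sharing a 1588-style clock can all agree on when to transmit:

    # Toy model: APs derive identical TDD windows from a shared clock
    # (which a 1588/PTP-like protocol would distribute).
    # FRAME_US and DOWNLINK_US are hypothetical values for illustration.
    FRAME_US = 2000      # 2 ms frame
    DOWNLINK_US = 1500   # 75/25 downlink/uplink split

    def direction_at(sync_time_us):
        """Which direction may transmit at a given synchronized timestamp."""
        return "downlink" if sync_time_us % FRAME_US < DOWNLINK_US else "uplink"

    # Any two APs with synchronized clocks agree on the window, so none of
    # them transmits while its neighbors are trying to receive.
    for t in (0, 1400, 1600, 3100):
        print(t, direction_at(t))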

As has been said multiple times, fixing in-home WiFi would do more for people's QoE than moving their upload from 20 
megs to 100 megs. 

----- 
Mike Hammett 
Intelligent Computing Solutions 
http://www.ics-il.com 

Midwest-IX 
http://www.midwest-ix.com 

----- Original Message -----

From: "Jim Troutman" <jamesltroutman () gmail com> 
To: "Mike Hammett" <nanog () ics-il net> 
Cc: "Christopher Morrow" <morrowc.lists () gmail com>, "nanog list" <nanog () nanog org> 
Sent: Tuesday, June 1, 2021 1:36:13 PM 
Subject: Re: New minimum speed for US broadband connections 



Mike, 


I know you have a lot of experience in this. 
I have built several networks and owned ISPs, too. 



How is it really all that more expensive to offer higher Internet speeds? 


The cost of the Internet bits per subscriber ceased being a major consideration in most budgets about 10 years ago. 
Staffing, trucks, billing, and rents cost far more. I'm way more upset about having to hire another FTE or buy a truck 
than about having to add another transit or peering port. 


Oh, wait, if you are still using last-century technology to deliver the last mile, I can see the problem. You 
cannot get enough bits to the subscriber without a large investment to replace the last mile. 


Most customers WILL notice a difference between a 10 Mbit and a 1 Gig connection day to day. 



Your performance assumptions seem to be based on there only ever being a single traffic flow over that connection, 
from a single endpoint. 



Typical subscriber usage isn’t anything remotely like that anymore. It is several to dozens of devices and usually 
every resident using bandwidth simultaneously when they are home. Plus all the background downloads of smartphone 
updates, massive content updates on all the game consoles, operating system updates, all those cloud backups, plus IoT 
devices like cameras with cloud DVRs. 



You may not like all these devices, and we can debate their usefulness, but the fact is, consumers are buying them and 
using them, and when things don't work well, the belief is "my ISP sucks," even if that isn't entirely true. 


My strongly held opinion is that fiber optic cable to the premises is the best and only long-term viable technology for 
"broadband," with a projected lifespan of 30 years or more. Everyone who has grid-tied electrical service should get to 
have fiber if they want it. 


I also believe that ISPs need to manage the customer's WiFi most of the time, because it is a huge part of the 
end-user's quality of experience. WiFi 6E will go a long way towards reducing interference and channel congestion and 
making "auto channel" actually work, but it will still be another 2-3 years before it is really common. 


Competently operated fiber optic networks are always going to win against any other technology. It is just a 
matter of time. 

On Tue, Jun 1, 2021 at 1:34 PM Mike Hammett < nanog () ics-il net > wrote: 

"Why is 100/100 seen as problematic to the industry players?" 


In rural settings, it's low density, so you're spending a bunch of money with a low probability of getting any return. 
There's also a low probability that the customer cares. 

" There's an underlying, I think, assumption that people won't use access speed/bandwidth that keeps coming up." 


On a 95th-percentile basis, no, they don't use it. 


On shorter time spans, sure. Does it really matter, though? If I can put a 100 meg file into Dropbox in under a second 
versus 10 seconds, does that really matter? If Netflix gets my form submission in 0.01 seconds instead of 0.1 seconds, 
does it matter? 
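
For scale, the raw transfer-time arithmetic (reading "100 meg file" as 100 megabytes and ignoring protocol overhead; a sketch, not a benchmark):

    # Time to move a 100 MB file at different line rates.
    def seconds_to_send(megabytes, mbps):
        return megabytes * 8 / mbps  # megabytes -> megabits, divided by rate

    for mbps in (20, 100, 1000):
        print(f"100 MB at {mbps} Mbps: {seconds_to_send(100, mbps):.1f} s")
    # -> 40.0 s, 8.0 s, 0.8 s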

I think you'll find few to argue against "faster is better." The argument is at what price? At what perceived benefit? 

Show me an average end-user that can tell the difference between a 10 meg upload and a 1 gig upload, aside from 
media-heavy professionals or the one-time full backup of a phone, PC, etc. Okay, show me two of them, ten of them... 

99% of the end-users I know can't tell the difference between any speeds above 5 megs. Beyond that, it either just works or 
it doesn't. 



----- 
Mike Hammett 
Intelligent Computing Solutions 
http://www.ics-il.com 

Midwest-IX 
http://www.midwest-ix.com 



From: "Christopher Morrow" < morrowc.lists () gmail com > 
To: "Mike Hammett" < nanog () ics-il net > 
Cc: aaron1 () gvtc com , "nanog list" < nanog () nanog org > 
Sent: Tuesday, June 1, 2021 12:14:43 PM 
Subject: Re: New minimum speed for US broadband connections 

On Tue, Jun 1, 2021 at 12:44 PM Mike Hammett < nanog () ics-il net > wrote: 



That is true, but if no one uses it, is it really gone? 

There's an underlying, I think, assumption that people won't use access speed/bandwidth that keeps coming up. 
I don't think this is an accurate assumption. I don't think it's really ever been accurate. 


There are a bunch of examples in this thread of reasons why 'more than X' is a good thing for the end-user, and of why 
average usage over time is a bad metric to use in the discussion. At the very least, the ability to get out from under 
serialization delays and microburst behavior is beneficial to the end-user. 
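
For a sense of scale, serialization delay is just the time one frame occupies the wire. A sketch using a standard 1500-byte Ethernet frame; a faster access link drains a microburst proportionally sooner even at identical average utilization:

    # Serialization delay of a 1500-byte frame at different access speeds.
    def serialization_ms(frame_bytes, mbps):
        return frame_bytes * 8 / (mbps * 1000)  # bits / (bits per ms)

    for mbps in (10, 100, 1000):
        print(f"1500 B at {mbps} Mbps: {serialization_ms(1500, mbps):.3f} ms")
    # -> 1.200 ms, 0.120 ms, 0.012 ms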


Maybe the question that's not asked (but should be) is: 
"Why is 100/100 seen as problematic to the industry players?" 

-- 

Jim Troutman, 

jamesltroutman () gmail com 

Pronouns: he/him/his 
207-514-5676 (cell) 
