With $37 billion of the $42 billion BEAD program allocated based on the number of locations unserved by broadband, accurately measuring who is unserved is critical, both for the allocation of funds and, more importantly, so that people without access to real broadband can be connected.
It's funny, because for months (years?) my colleagues at PAgCASA have talked about using hardware devices to complement browser-based speed tests.
We always thought that the FCC and ISPs would challenge the tests, because that has been the pattern in the past.
I thought it was mighty bold of so many businesses to do broadband mapping via browser tests on the assumption that the data would be admissible.
That felt like whistling past the graveyard, ignoring the part of the business model that they did not want to see.
What's the solution? Verify the maps with hardware instruments that are beyond the scrutiny of the FCC and ISPs. Human error is inextricably tied to browser tests. They are good for understanding where a problem is, but the final numbers should come from instruments that eliminate the errors introduced by humans running browser tests.
Excellent article as always. Now my 2 cents.
The FCC disallowing speed tests is neither logical nor practical, regardless of the provider or "source" of the test.
At a simpler level: if you can't measure, you can't manage. If speed tests are not the measure, what is the metric? The current "service" model doesn't make any sense.
It is like saying a speedometer isn't accurate because tire size may vary and the speedometer isn't 100% accurate. Without an independent measure, we can't know what the actual service level is.
Treating speed tests with a grain of salt is totally acceptable. We look at samples, and if we see a mix of "served," "underserved," and "unserved" (by our definition), we can be sure services are available, because some of it is visible, and there are always variables.
Perhaps they could require 10% or even 20% sample rates? Or multiple samples at different times of day and days of the week? All of that would provide statistically supportable empirical evidence.
In Brown County, WI, if we see areas where every single test is well below 25 Mbps (in some cases below 5 Mbps), that leads to the conclusion that the area is UNSERVED by our definition, and there is a 99% chance it is actually unserved by the IIJA definition. https://browncounty.maps.arcgis.com/apps/webappviewer/index.html?id=ae44a0c299554f7ea4e2561d82700451
With a 10%+ sample size, I believe Brown County can prove that service in specific areas is well below the advertised 25/3 threshold (if we are allowed). We also identify where we need more samples to be sure we are exceeding the 10% sample rate. Being rigorous about speed tests is good, but denying speed tests leaves a hole.
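The sampling idea above can be sketched as code. This is a hypothetical illustration, not an official methodology: the 25/3 and 100/20 Mbps thresholds follow the IIJA/BEAD definitions discussed here, while the 10% sample-rate floor, the function name, and the data shapes are this sketch's own assumptions.

```python
# Hypothetical sketch: classify an area from speed-test samples, given a
# known count of serviceable locations. The 10% minimum sample rate is an
# assumption for illustration, not an official rule.

def classify_area(tests, total_locations, min_sample_rate=0.10):
    """tests: list of (down_mbps, up_mbps), one best test per location."""
    sample_rate = len(tests) / total_locations
    if sample_rate < min_sample_rate:
        return "insufficient samples (%.0f%% < %.0f%%)" % (
            sample_rate * 100, min_sample_rate * 100)
    # Using each location's BEST observed test biases us toward calling an
    # area served, so an "unserved" verdict is conservative.
    if all(d < 25 or u < 3 for d, u in tests):
        return "unserved"
    if all(d < 100 or u < 20 for d, u in tests):
        return "underserved"
    return "served (at least in part)"

print(classify_area([(4.8, 0.9), (12.0, 1.5), (22.3, 2.8)], total_locations=30))
```

The conservative bias (best test per location) matters: it means a uniformly slow area cannot be explained away as bad Wi-Fi alone.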
Thoughts?
I'd like to see some rigorous research about how we would aggregate speed tests, at what geography, and what types and how many locations that would add to our unserved counts.
Even if we take the maximum throughput test in a given area, I would still have the concern that it isn't clear whether that is the throughput at the router or from client devices. We badly need to be able to differentiate between router throughput and client throughput. That's why the FCC MBA (Measuring Broadband America) program is so valuable: they have controlled samples, and they know how the tests are being run.
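One conservative aggregation, hinted at above, is to keep the best test observed per location, on the theory that the best test is the closest available proxy for router-level throughput. A minimal sketch; the record shape and location IDs are hypothetical:

```python
from collections import defaultdict

# Hypothetical records: (location_id, down_mbps). Taking the max per
# location treats the best observed test as the closest proxy for
# router-level throughput; it still underestimates when every test at a
# location ran over poor Wi-Fi or a slow client device.
def best_per_location(records):
    best = defaultdict(float)
    for loc, down in records:
        best[loc] = max(best[loc], down)
    return dict(best)

records = [("1001", 18.2), ("1001", 94.5), ("1002", 4.1), ("1002", 4.7)]
print(best_per_location(records))
```

In this toy data, location 1001 looks served once its best test is kept, while 1002 stays slow across every sample, which is the pattern that suggests a network problem rather than a device problem.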
There are very different outcomes: if you aren't getting the 100/20 you pay for at the router, that shouldn't count, and there probably needs to be a new network there. If you are getting the 100/20 you pay for, you probably need a new Wi-Fi access point, and to take it out of the closet.
https://imgur.com/a/tjXVQjp
Here's a random zoom-in from the map you provided (which is pretty cool!). This area shows as red, which is very bad on your map. This neighborhood has Charter cable at every address, with advertised throughput of 1 Gbps down / 35 Mbps up. I haven't seen evidence that cable providers like this one fail to give subscribers what they pay for at the router.
As far as I am aware, it isn't actually available in that red area (it is nearby, which is why the map transitions to white). It could also be an artifact of the IDW interpolation, which fuzzes the edges of things. We had included that area in a state grant application with Charter/Spectrum that was not awarded but scored well with the WI PSC Broadband Grants Office.
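For context on the edge-fuzzing: IDW (inverse distance weighting) estimates the value at an unmeasured point as a distance-weighted average of nearby samples, so a point just outside a slow pocket borrows from fast neighbors, and vice versa. A toy sketch of the technique, not the mapping tool's actual implementation:

```python
import math

def idw(x, y, samples, power=2):
    """samples: list of (sx, sy, value). Returns the IDW estimate at (x, y)."""
    num = den = 0.0
    for sx, sy, value in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0:
            return value  # exactly on a sample point
        w = 1.0 / d ** power  # closer samples weigh more
        num += w * value
        den += w
    return num / den

# A 5 Mbps test next to two 100 Mbps tests: the midpoint gets smeared
# toward "served" even though nobody measured there.
print(round(idw(0.5, 0.0, [(0, 0, 5.0), (1, 0, 100.0), (1, 1, 100.0)]), 1))
```

This is exactly why an interpolated surface can show red bleeding into a genuinely served block (or white bleeding into an unserved one) at the boundary.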
Our raw data comes from expressoptimizer (geolabs), which is M-Lab based.
https://expressoptimizer.net/projects/Wisconsin/speedtestmap.php
If you zoom to Brown County, you can see the raw data for that same area.
In the urban/metro areas, it is obvious that people who appear to be unserved are mixed in with the very well served, so the cause is most likely choice, cost, local congestion, or poor equipment. We have communicated that to elected officials.
We are reasonably confident in the speed test results due to the high sample rates. But as you pointed out, the SOURCE of the test is critical. At a simpler level, even if the results are off by 100% (i.e., if we double all the numbers), those dark red areas still do not exceed 25 Mbps. Further, we have had complaints of terrible service that match the poor speed tests. And those match up with school district sampling and data from Microsoft indicating poor service in those areas.
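That doubling argument is essentially a sensitivity check, and it is easy to make explicit. The sample values below are hypothetical, not actual Brown County measurements:

```python
# Sensitivity check: even granting a 100% measurement error in our favor
# (doubling every result), does the dark-red area clear 25 Mbps?
dark_red_tests_mbps = [3.1, 4.8, 7.2, 9.9, 11.4]  # hypothetical samples

doubled = [2 * t for t in dark_red_tests_mbps]
still_unserved = max(doubled) < 25
print(doubled, "-> still below 25 Mbps:", still_unserved)
```

If the conclusion survives a 100% error bar, arguing about the precision of any single browser test misses the point.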
That all adds up to some fairly solid data that I thought was going to be used (even if somewhat distrusted). So it is very frustrating to find out that none of it will be accepted by the FCC.
Ultimately, the people in those darker red, red, and yellow areas on the original map have awful service, regardless of whether the FCC accepts the speed tests or not. Locally, we are using the map to help prioritize tough decisions, with limited funds, based on something that can be measured (i.e., speed tests over time).
I appreciate the feedback.
I of course agree that that is the throughput they're experiencing, and they are unlikely to be having a good experience with the Internet at that throughput.
This is an impressive set of data. I can understand why you're frustrated if you've invested this much time in building it out and it won't be used.