The speed at which IT and telecommunications are changing is startling. The ramifications are, for the most part, good: current and potential subscribers get what they want sooner, and service providers start cashing in sooner. In short, networks start doing what everyone wants more quickly.
There is one potential downside to this surge in what some call “service velocity”: some services may come to market less fully baked than they were in the past. This can lead to offerings that are rough around the edges and less secure than they need to be.
A story at the MIT Technology Review discusses a paper filed by researchers at the Virginia Tech College of Engineering with the National Telecommunications and Information Administration (NTIA). The paper suggests that it would be easy and inexpensive for terrorists, criminals or other bad actors to bring down an LTE network by jamming its control signals, which are concentrated and vulnerable.
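A rough back-of-the-envelope calculation illustrates why concentrated control signals make attractive jamming targets. The sketch below uses standard LTE physical-layer figures (a 10 MHz carrier, 15 kHz subcarrier spacing, and synchronization signals confined to the central 62 subcarriers), but the equal-power-density model is a simplification for illustration, not the analysis from the filing:

```python
import math

# Illustrative sketch: jamming only LTE's narrow synchronization signals
# requires far less power than blanketing the whole carrier.
# LTE figures are standard; the power model is a simplification.

carrier_bw_hz = 10e6        # full 10 MHz LTE carrier
subcarrier_hz = 15e3        # LTE subcarrier spacing
sync_subcarriers = 62       # PSS/SSS occupy the central 62 subcarriers

sync_bw_hz = sync_subcarriers * subcarrier_hz  # ~0.93 MHz

# At equal jamming power *density*, total transmit power scales with
# the bandwidth the jammer must cover:
bw_ratio = carrier_bw_hz / sync_bw_hz
advantage_db = 10 * math.log10(bw_ratio)

print(f"Sync-signal bandwidth: {sync_bw_hz / 1e6:.2f} MHz")
print(f"Bandwidth ratio: {bw_ratio:.1f}x")
print(f"Rough power advantage of targeted jamming: {advantage_db:.1f} dB")
```

Under these assumptions a targeted jammer needs roughly a tenth of the power of a barrage jammer, which is the intuition behind the researchers' warning that the attack is cheap.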
The executive summary of the filing paints a disturbing picture:
If LTE technology is to be used for the air interface of the public safety network, then we should consider the types of jamming attacks that could occur five or ten years from now. It is very possible for radio jamming to accompany a terrorist attack, for the purpose of preventing communications and increasing destruction. Likewise it is possible for criminal organizations to create mayhem among public safety personnel by jamming.
The Gizmodo story on the report points out that 2G and 3G networks would still work during such an attack. That’s good news. The problem, however, is that applications will increasingly rely exclusively on LTE:
The worst part: LTE has been proposed for the new communication system for emergency response. Called FirstNet, it was designed after the many communications problems experienced by first response teams during 9/11. Just imagine the picture: terrorists first attacking a major target and then jamming the communication network used by the emergency forces trying to help. According to Reed, this is specifically what can happen.
This is clearly an issue that must be remedied. The follow-on question is why the problem emerged in the first place. The coverage doesn’t address it, but it is reasonable to suggest that the rush to market in an extremely competitive environment is a possible reason for the vulnerability.
In the old days, telecommunications companies would test new services until they were demonstrably secure; cable companies, for instance, were cautious to a fault. Compounding the rush to market is the reality that the proprietary technology of the past has given way to ubiquitous IP. The Internet is everywhere, which makes it inherently easier for bad actors to develop attacks and gives them a mechanism for delivering whatever they come up with.
There really are two stories here. The obvious one is the danger the researchers describe. Perhaps the good news on that front is a comment on the story in which a seemingly knowledgeable individual suggests that the coverage misses the point and overhypes the danger; hopefully, that is so. The other is the higher-level question of whether technology is being introduced before it is vetted thoroughly enough.