
Cybersecurity 101: How we can stop making so many mistakes



The online exercise-tracking map that recently revealed the locations of remote U.S. military outposts, and even the identities of particular soldiers, was a textbook cybersecurity failure. But the Department of Defense is by no means alone in failing to imagine the swoops and sallies that happen with data, and the innumerable ways it can be stolen, diverted or revealed.

While the Strava “heat map” was good marketing, there was collateral damage for the DoD. The problem was so simple it could be stated in a few words, and a 20-year-old Australian university student did exactly that in a tweet. “Not amazing for Op-Sec,” he wrote. “US Bases are clearly identifiable and mappable.”

While the problem was easy to describe, for some reason it wasn’t quite as easy to predict, though it should have been. By issuing Fitbit devices to address obesity among the enlisted, the Defense Department inadvertently created a source of intelligence about troop locations in Afghanistan, Syria and Iraq.

Recent news of a low-level discussion in the Trump administration about a government-owned nationwide 5G network was newsworthy because it seemed redundant: AT&T and Verizon are already poised to make 5G a reality. Would Chinese-supplied hardware (such as integrated circuitry) be used to power that network? If so, how could we protect against Chinese influence?


We need to start from the premise that all connected devices are hackable, and reverse engineer from there. The future is not terribly different from our current situation, but it will be defined by a commonly held belief that every device can and will be hacked. 

What would this look like with regard to the Strava reveal?

  • Defining requirements and general vulnerabilities, including security and privacy needs. But the first question should be: what is it? (Answer: a small computer with GPS.) The most basic operational requirement would have prevented the Fitbit failure: something like, “maintain operational security for soldiers who wear the devices.”
  • Devising a scoring rubric for potential security solutions, testing every system against the requirement. Each potential solution should be scored on security and privacy to allow for reasoned decisions about trade-offs between security and functionality.
  • Building service level agreements with vendors that specify security and privacy, e.g., “the device must not transmit position data at any time,” and requiring a demonstration of compliance during testing of individual units as well as the entire system.
  • Third party testing for compliance with the requirements and safeguards against general vulnerabilities.
  • Creating and distributing, before deployment, an education and awareness plan covering the security issues associated with a device and any associated service. Physical security and common-sense security must also be included, and at the organizational level it might not be a bad idea to devise a pass/fail test for users.
  • Including the device in the broader risk assessment and vulnerability management processes of an organization as it is deployed.
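The scoring step above can be sketched in a few lines of code. This is a purely illustrative rubric, assuming hypothetical criteria names and weights (nothing here reflects an official DoD process): each candidate device is rated per criterion, and a weighted total makes the security/functionality trade-off explicit.

```python
# Hypothetical weighted scoring rubric for candidate connected devices.
# Criteria and weights are illustrative only, not an official process.
CRITERIA = {
    "security": 0.4,       # e.g., encrypted storage, no default credentials
    "privacy": 0.3,        # e.g., position data never leaves the device
    "functionality": 0.3,  # does it meet the operational need?
}

def score_device(ratings: dict) -> float:
    """Return a weighted score in [0, 10] from per-criterion ratings (0-10)."""
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

# A fitness tracker that broadcasts GPS traces rates poorly on privacy,
# dragging down its total despite strong functionality.
tracker = {"security": 6, "privacy": 1, "functionality": 9}
print(score_device(tracker))  # 5.4
```

A rubric like this forces the trade-off into the open: a device cannot ride high functionality marks past a disqualifying privacy rating without someone signing off on the number.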

It’s entirely possible that someone in procurement didn’t identify the Fitbit as a computer system. The devices may have gotten no more scrutiny than a purchase of sunglasses or flashlights. If that’s what happened, no one in a position to determine the associated risk was aware of the purchase. This sort of surprise can be avoided by making everyone in an organization aware of the security dangers posed by connected devices.

We need a new approach, one predicated on thinking ahead and baking best security practices into every aspect of our digital lives — from individual mindset, behavior and privacy protocols to the way networks are used and which products we run on them.

In general, we need to always be looking at the “big picture.” This means thinking about what kind of data a device is collecting, how it’s going to be used, and how that data will be protected.

Here are the greatest hits of what a sea change might look like:

  • Hackers have tried to “break” the device before it gets to market.
  • The software that runs a device can be updated, and the device has self-updating firmware (or prompts the user when an update is available). Manufacturers know what it takes to push security updates when post-release flaws are discovered, and have streamlined the process of keeping those devices secure.
  • Device manufacturers have hired hackers and futurists to imagine how the device could be compromised for the duration of its 2-8 year lifespan. 
  • Android isn’t the default back-end operating system, and where it is, it is at least a supported version that can be updated.
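The update-prompt behavior described above can be sketched minimally. This assumes a device that reports its installed firmware version and a vendor feed of the latest release; the version strings and helper names are hypothetical.

```python
# Minimal sketch of an update-availability check for a connected device.
# Version strings and the comparison scheme are illustrative assumptions.
def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '2.4.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def needs_update(installed: str, latest: str) -> bool:
    """True if the vendor's latest release is newer than what's installed."""
    return parse_version(installed) < parse_version(latest)

if needs_update("2.4.1", "2.10.0"):
    print("Update available: prompt the user or apply automatically.")
```

Comparing tuples rather than raw strings matters here: a string comparison would wrongly rank "2.4.1" above "2.10.0".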

The sea change we need is a mindset, one that is constantly evolving, and includes the never-ending socialization of current security and privacy best practices. It is the institutionalization of reality-based paranoia when it comes to the security of connected devices.

Adam K. Levin is chairman and founder of CyberScout (formerly IDT911) and co-founder of Credit.com. He is a former director of the New Jersey Division of Consumer Affairs and is the author of Swiped: How to Protect Yourself In a World Full of Scammers, Phishers, and Identity Thieves, which debuted at #1 on the Amazon Hot New Releases List.


