The Unintended Harms of Cybersecurity

Interesting research: "Identifying Unintended Harms of Cybersecurity Countermeasures":

Abstract: Well-meaning cybersecurity risk owners will deploy countermeasures (technologies or procedures) to manage risks to their services or systems. In some cases, those countermeasures will produce unintended consequences, which must then be addressed. Unintended consequences can potentially induce harm, adversely affecting user behaviour, user inclusion, or the infrastructure itself (including other services or countermeasures). Here we propose a framework for preemptively identifying unintended harms of risk countermeasures in cybersecurity. The framework identifies a series of unintended harms which go beyond technology alone, to consider the cyberphysical and sociotechnical space: displacement, insecure norms, additional costs, misuse, misclassification, amplification, and disruption. We demonstrate our framework through application to the complex, multi-stakeholder challenges associated with the prevention of cyberbullying as an applied example. Our framework aims to illuminate harmful consequences, not to paralyze decision-making, but so that potential unintended harms can be more thoroughly considered in risk management strategies. The framework can support identification and preemptive planning to identify vulnerable populations and preemptively insulate them from harm. There are opportunities to use the framework in coordinating risk management strategy across stakeholders in complex cyberphysical environments.
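The seven categories read naturally as a review checklist. Here is a minimal, purely illustrative Python sketch -- mine, not the paper's; the one-line glosses are my paraphrase, not the authors' definitions -- encoding the categories and tracking which harms a review of a given countermeasure has actually considered:

from dataclasses import dataclass, field

# The seven unintended-harm categories named in the abstract. The short
# glosses are my own paraphrase, not the paper's definitions.
HARM_CATEGORIES = {
    "displacement": "the harmful activity moves somewhere else",
    "insecure norms": "users learn habits that weaken security elsewhere",
    "additional costs": "time, money, or effort pushed onto users or operators",
    "misuse": "the countermeasure itself gets abused",
    "misclassification": "legitimate users or content get flagged as bad",
    "amplification": "the original harm gets worse",
    "disruption": "other services or countermeasures stop working",
}

@dataclass
class CountermeasureReview:
    """Walk a proposed countermeasure through each harm category."""
    countermeasure: str
    notes: dict = field(default_factory=dict)

    def record(self, category: str, note: str) -> None:
        if category not in HARM_CATEGORIES:
            raise ValueError(f"unknown harm category: {category}")
        self.notes[category] = note

    def unconsidered(self) -> list:
        # Categories nobody has written anything about yet.
        return [c for c in HARM_CATEGORIES if c not in self.notes]

review = CountermeasureReview("keyword filter for cyberbullying terms")
review.record("misclassification", "benign slang and support groups get blocked")
review.record("displacement", "bullying moves to platforms without the filter")
print(review.unconsidered())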

Security is always a trade-off. I appreciate work that examines the details of that trade-off.

Posted on June 26, 2020 at 7:00 AM • 0 Comments

Analyzing IoT Security Best Practices

New research: "Best Practices for IoT Security: What Does That Even Mean?" by Christopher Bellman and Paul C. van Oorschot:

Abstract: Best practices for Internet of Things (IoT) security have recently attracted considerable attention worldwide from industry and governments, while academic research has highlighted the failure of many IoT product manufacturers to follow accepted practices. We explore not the failure to follow best practices, but rather a surprising lack of understanding, and void in the literature, on what (generically) "best practice" means, independent of meaningfully identifying specific individual practices. Confusion is evident from guidelines that conflate desired outcomes with security practices to achieve those outcomes. How do best practices, good practices, and standard practices differ? Or guidelines, recommendations, and requirements? Can something be a best practice if it is not actionable? We consider categories of best practices, and how they apply over the lifecycle of IoT devices. For concreteness in our discussion, we analyze and categorize a set of 1014 IoT security best practices, recommendations, and guidelines from industrial, government, and academic sources. As one example result, we find that about 70% of these practices or guidelines relate to early IoT device lifecycle stages, highlighting the critical position of manufacturers in addressing the security issues in question. We hope that our work provides a basis for the community to build on in order to better understand best practices, identify and reach consensus on specific practices, and then find ways to motivate relevant stakeholders to follow them.
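As a toy illustration of the kind of categorization the paper performs -- my own Python sketch, not the authors' methodology, with made-up stage names and example practices -- one could tag each collected practice with the lifecycle stage it primarily addresses and compute the share falling in the early, manufacturer-controlled stages:

from collections import Counter

# Hypothetical stage names and example practices -- not the paper's taxonomy or data.
EARLY_STAGES = {"design", "implementation", "manufacture"}

practices = [
    {"text": "ship unique per-device credentials", "stage": "manufacture"},
    {"text": "sign and verify firmware updates", "stage": "design"},
    {"text": "publish a vulnerability disclosure contact", "stage": "operation"},
    {"text": "support secure decommissioning", "stage": "end-of-life"},
]

by_stage = Counter(p["stage"] for p in practices)
early_share = sum(by_stage[s] for s in EARLY_STAGES) / len(practices)
print(by_stage)
print(f"{early_share:.0%} of practices target early lifecycle stages")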

Back in 2017, I catalogued nineteen security and privacy guideline documents for the Internet of Things. Our problem right now isn't that we don't know how to secure these devices; it's that there is no economic or regulatory incentive to do so.

Posted on June 25, 2020 at 7:09 AM • 7 Comments

COVID-19 Risks of Flying

I fly a lot. Over the past five years, my average speed has been 32 miles an hour. That all changed mid-March. It's been 105 days since I've been on an airplane -- longer than any other time in my adult life -- and I have no future flights scheduled. This is all a prelude to saying that I have been paying a lot of attention to the COVID-related risks of flying.

We know a lot more about how COVID-19 spreads than we did in March. The "less than six feet, more than ten minutes" model has given way to a much more sophisticated model involving airflow, the level of virus in the room, and the viral load in the person who might be infected.

Regarding airplanes specifically: on the whole, they seem safer than many other group activities. Of all the research about contact tracing results I have read, I have seen no stories of a sick person on an airplane infecting other passengers. There are no superspreader events involving airplanes. (That did happen with SARS.) It seems that the airflow inside the cabin really helps.

Airlines are trying to make things better: blocking middle seats, serving less food and drink, and trying to get people to wear masks. (This video is worth watching.) I've started to see airlines requiring masks and banning passengers who refuse to wear them, rather than just strongly encouraging them. (If mask wearing is treated the same as seat belt wearing, it will make a huge difference.) Finally, there are a lot of dumb things that airlines are doing.

This article interviewed 511 epidemiologists, and the general consensus was that flying is riskier than getting a haircut but less risky than eating in a restaurant. I think that most of the risk is pre-flight, in the airport -- crowds at the security checkpoints, gates, and so on -- and that those risks are manageable with mask wearing and situational awareness. So while I am not flying yet, I might be willing to soon. (It doesn't help that I get a -1 on my COVID saving throw for type A blood, and another -1 for male pattern baldness. On the other hand, I think I get a +3 Constitution bonus. Maybe, instead of sky marshals, we can have high-level clerics on the planes.)

And everyone: wear a mask, and wash your hands.

Posted on June 24, 2020 at 12:32 PM • 42 Comments

Cryptocurrency Pump and Dump Scams

Really interesting research: "An examination of the cryptocurrency pump and dump ecosystem":

Abstract: The surge of interest in cryptocurrencies has been accompanied by a proliferation of fraud. This paper examines pump and dump schemes. The recent explosion of nearly 2,000 cryptocurrencies in an unregulated environment has expanded the scope for abuse. We quantify the scope of cryptocurrency pump and dump schemes on Discord and Telegram, two popular group-messaging platforms. We joined all relevant Telegram and Discord groups/channels and identified thousands of different pumps. Our findings provide the first measure of the scope of such pumps and empirically document important properties of this ecosystem.
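The paper is a measurement study, not a detection tool, but the basic signature of a pump -- a coordinated burst of buying in a thinly traded coin -- is easy to sketch. The Python fragment below is a purely illustrative heuristic of mine, with assumed column names (timestamp, price, volume) and arbitrary thresholds; it is not the authors' method:

import pandas as pd

def flag_pump_candidates(trades: pd.DataFrame,
                         window: int = 60,
                         volume_factor: float = 5.0,
                         price_factor: float = 1.05) -> pd.DataFrame:
    """Flag rows where volume and price jump far above the trailing baseline."""
    df = trades.sort_values("timestamp").reset_index(drop=True)
    # Trailing averages over the previous `window` observations, excluding the current row.
    baseline_volume = df["volume"].rolling(window, min_periods=window).mean().shift(1)
    baseline_price = df["price"].rolling(window, min_periods=window).mean().shift(1)
    spike = (df["volume"] > volume_factor * baseline_volume) & \
            (df["price"] > price_factor * baseline_price)
    return df[spike]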

Posted on June 24, 2020 at 6:30 AM • 3 Comments

Nation-State Espionage Campaigns against Middle East Defense Contractors

Report on espionage attacks using LinkedIn as a vector for malware, with details and screenshots. They talk about "several hints suggesting a possible link" to the Lazarus group (aka North Korea), but that's by no means definite.

As part of the initial compromise phase, the Operation In(ter)ception attackers had created fake LinkedIn accounts posing as HR representatives of well-known companies in the aerospace and defense industries. In our investigation, we've seen profiles impersonating Collins Aerospace (formerly Rockwell Collins) and General Dynamics, both major US corporations in the field.

Detailed report.

Posted on June 23, 2020 at 6:22 AM • 6 Comments

Identifying a Person Based on a Photo, LinkedIn and Etsy Profiles, and Other Internet Bread Crumbs

Interesting story of how the police can identify someone by following the evidence chain from website to website.

According to filings in Blumenthal's case, FBI agents had little more to go on when they started their investigation than the news helicopter footage of the woman setting the police car ablaze as it was broadcast live May 30.

It showed the woman, in flame-retardant gloves, grabbing a burning piece of a police barricade that had already been used to set one squad car on fire and tossing it into the police SUV parked nearby. Within seconds, that car was also engulfed in flames.

Investigators discovered other images depicting the same scene on Instagram and the video sharing website Vimeo. Those allowed agents to zoom in and identify a stylized tattoo of a peace sign on the woman's right forearm.

Scouring other images -- including a cache of roughly 500 photos of the Philly protest shared by an amateur photographer -- agents found shots of a woman with the same tattoo that gave a clear depiction of the slogan on her T-shirt.

[...]

That shirt, agents said, was found to have been sold only in one location: a shop on Etsy, the online marketplace for crafters, purveyors of custom-made clothing and jewelry, and other collectibles....

The top review on her page, dated just six days before the protest, was from a user identifying herself as "Xx Mv," who listed her location as Philadelphia and her username as "alleycatlore."

A Google search of that handle led agents to an account on Poshmark, the mobile fashion marketplace, with a user handle "lore-elisabeth." And subsequent searches for that name turned up Blumenthal's LinkedIn profile, where she identifies herself as a graduate of William Penn Charter School and several yoga and massage therapy training centers.

From there, they located Blumenthal's Jenkintown massage studio and its website, which featured videos demonstrating her at work. On her forearm, agents discovered, was the same distinctive tattoo that investigators first identified on the arsonist in the original TV video.

The obvious moral isn't a new one: don't have a distinctive tattoo. But more interesting is how different pieces of evidence can be strung together in order to identify someone. This particular chain was put together manually, but expect machine learning techniques to be able to do this sort of thing automatically -- and for organizations like the NSA to implement them on a broad scale.
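Purely for illustration, here is what "stringing evidence together" looks like when reduced to a data structure: a small Python sketch (mine, not an investigative tool) that models the publicly observable links from this case as a graph and searches for a path from the initial footage to a named individual. The node labels paraphrase the article.

from collections import deque

# Each key links to the attributes it exposed, following the article's narrative.
links = {
    "helicopter footage": ["forearm tattoo", "t-shirt slogan"],
    "t-shirt slogan": ["etsy shop"],
    "etsy shop": ["review by user 'alleycatlore'"],
    "review by user 'alleycatlore'": ["poshmark handle 'lore-elisabeth'"],
    "poshmark handle 'lore-elisabeth'": ["linkedin profile"],
    "linkedin profile": ["massage studio website"],
    "massage studio website": ["forearm tattoo", "named individual"],
}

def evidence_chain(start, goal):
    """Breadth-first search for a chain of publicly linked attributes."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(" -> ".join(evidence_chain("helicopter footage", "named individual")))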

Another article did a more detailed analysis, and concludes that the Etsy review was the linchpin.

Note to commenters: political commentary on the protesters or protests will be deleted. There are many other forums on the Internet to discuss that.

Posted on June 22, 2020 at 7:35 AM • 40 Comments

Security and Human Behavior (SHB) 2020

Today is the second day of the thirteenth Workshop on Security and Human Behavior. It's being hosted by the University of Cambridge, which in today's world means we're all meeting on Zoom.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The forty or so attendees include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It's not just an interdisciplinary event; most of the people here are individually interdisciplinary.

Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. We've done pretty well translating this format to video chat, including using the random breakout feature to put people into small groups.

I invariably find this to be the most intellectually stimulating two days of my professional year. It influences my thinking in many different, and sometimes surprising, ways.

This year's schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, and twelfth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.

Posted on June 19, 2020 at 2:09 PM • 4 Comments

New Hacking-for-Hire Company in India

Citizen Lab has a new report on Dark Basin, a large hacking-for-hire company in India.

Key Findings:

  • Dark Basin is a hack-for-hire group that has targeted thousands of individuals and hundreds of institutions on six continents. Targets include advocacy groups and journalists, elected and senior government officials, hedge funds, and multiple industries.
  • Dark Basin extensively targeted American nonprofits, including organisations working on a campaign called #ExxonKnew, which asserted that ExxonMobil hid information about climate change for decades.
  • We also identify Dark Basin as the group behind the phishing of organizations working on net neutrality advocacy, previously reported by the Electronic Frontier Foundation.
  • We link Dark Basin with high confidence to an Indian company, BellTroX InfoTech Services, and related entities.
  • Citizen Lab has notified hundreds of targeted individuals and institutions and, where possible, provided them with assistance in tracking and identifying the campaign. At the request of several targets, Citizen Lab shared information about their targeting with the US Department of Justice (DOJ). We are in the process of notifying additional targets.

BellTroX InfoTech Services has assisted clients in spying on over 10,000 email accounts around the world, including accounts of politicians, investors, journalists and activists.

News article. Boing Boing post.

Posted on June 19, 2020 at 6:38 AM • 3 Comments

Theft of CIA's "Vault Seven" Hacking Tools Due to Its Own Lousy Security

The Washington Post is reporting on an internal CIA report about its "Vault 7" security breach:

The breach -- allegedly committed by a CIA employee -- was discovered a year after it happened, when the information was published by WikiLeaks, in March 2017. The anti-secrecy group dubbed the release "Vault 7," and U.S. officials have said it was the biggest unauthorized disclosure of classified information in the CIA's history, causing the agency to shut down some intelligence operations and alerting foreign adversaries to the spy agency's techniques.

The October 2017 report by the CIA's WikiLeaks Task Force, several pages of which were missing or redacted, portrays an agency more concerned with bulking up its cyber arsenal than keeping those tools secure. Security procedures were "woefully lax" within the special unit that designed and built the tools, the report said.

Without the WikiLeaks disclosure, the CIA might never have known the tools had been stolen, according to the report. "Had the data been stolen for the benefit of a state adversary and not published, we might still be unaware of the loss," the task force concluded.

The task force report was provided to The Washington Post by the office of Sen. Ron Wyden (D-Ore.), a member of the Senate Intelligence Committee, who has pressed for stronger cybersecurity in the intelligence community. He obtained the redacted, incomplete copy from the Justice Department.

It's all still up on WikiLeaks.

Posted on June 18, 2020 at 6:34 AM • 4 Comments
