Interview with April Falcon Doss, Author of Cyber Privacy: Who Has Your Data and Why You Should Care

Tens of thousands of days ago, in the pre-IT days of adolescence, your sense of privacy likely evolved from the first time you locked your bedroom door or found a double-secret, under-the-pillow hiding place for your diary. Avoiding the prying eyes of your little brother required such ingenious tactics.

Alas, in the cyber age, Big Brother, Google, Facebook, and countless other snoopy parties aren’t so easily put off—your habits, health, diet, hobbies, and preferences are the stuff of their intense interest. To them, you are the product and they sell your personal information with capitalistic abandon. Truth be told, if you have an online presence, you can be certain marketers know more about you than your family does.

In Cyber Privacy, April Falcon Doss offers a thorough, eye-opening, and at times chilling overview of our cyber risks, both existing and developing—a huge accomplishment, as well as a service to all of us who are concerned about personal privacy in a rapidly evolving digital world.

April was gracious enough to agree to answer a few of our most pressing follow-up questions.

What do you say to the people who shrug their shoulders, believe that cyber threats are exaggerated, and anyway assume the government has their back?

It’s absolutely appropriate for people to have different levels of concern about cyber threats, whether those worries focus on personal privacy, the risk of catastrophic data breaches, or cyber espionage by foreign governments. But I think it’s helpful to be well informed before shrugging off those risks—and that’s really what this book is intended to do: to bring together in one place an overview of all the major areas of data privacy risk, so that people who might be familiar with one aspect of these issues can easily become equally well informed about the others. I find that people are often surprised to discover exactly how much personal information they’re giving away, who it’s going to, and how it’s being used.

For people who assume the government will be able to protect them, that’s really a bit of a misconception. The government is only authorized to protect government networks—and that’s for good reason. Protecting against cyber attacks shouldn’t open the door to privacy intrusions. If, say, the government were monitoring the networks of private companies or of individuals, even in order to protect against cyber crime, that would raise a lot of important concerns about civil liberties and government overreach. So it’s a delicate balance to strike in making sure that government knowledge and technical expertise can benefit the private sector without intruding on important privacy rights.

Will you give us a sense of who the good guys are and who the bad guys are in the cyber-privacy battles? Is the newly enacted California Consumer Privacy Act a big improvement?

If only it were as simple as a clear-cut set of good guys and bad guys! The key to understanding the privacy landscape is knowing that there are a lot of different players with different motivations, and everyone—from technology companies to government agencies—thinks their actions are justified. That’s why it’s so important for all of us as citizens of this digital landscape to be sufficiently well informed that we can make our own judgments about whether a particular company is one we want to buy from, whether a particular social media platform is one we want to engage on, or whether a particular government program—for law enforcement, immigration, education, health care, etc.—is making responsible use of our data.

In an economic context, it’s really about understanding what our privacy is worth, and deciding whether we’re going to demand more protection and better confidentiality from the companies whose apps and devices we use. Up until recently, consumers haven’t had much meaningful choice, as a small number of tech companies—the Big Four, as I refer to them in the book—have had such dominant market power.

The adoption of the California Consumer Privacy Act (CCPA), which took effect in 2020, could prove to be a really pivotal moment in American culture when it comes to privacy. CCPA takes a European-style approach to privacy, establishing new rights for California residents to know how their data is being collected and used and to prevent companies from selling their information. It also imposes new obligations on companies that handle personal data, with the risk of fines and lawsuits to give the law teeth. These new rights and obligations apply to the personal data of some forty million people, in the world’s fifth-largest economy. There’s clearly an appetite for this kind of data protection: the law was just strengthened by new amendments in the November 2020 election. In the wake of CCPA, other states are considering similar laws, and momentum is growing for a federal privacy law—a topic that has attracted bipartisan interest and is sure to be debated by the new Congress in 2021.

In the intro to Cyber Privacy, you write: “Privacy is intrinsic to individual dignity and our sense of personhood, to our ability to live as unique beings. Privacy allows us to test our ideas, to live without undue scrutiny. It lets us choose our relationships, overcome our pasts and direct our future, and change our minds and our behavior over time.” With so much government and corporate surveillance happening right now, do you think certain individuals, perhaps even certain generations, have a different view of privacy? As a society, is our sense of privacy changing or adapting? Can you help us understand what “privacy” means?

That last question—what “privacy” means—is the hardest! “Privacy” has really become an overloaded term; we have a gut sense that we know it when we see it, but we have trouble articulating it and differentiating between its many facets. This matters because if you don’t know precisely what you’re trying to protect, it’s very hard to protect it, and very hard to be sensible in balancing privacy against other societal goals, like technology innovation, access to information, public safety, and so on. Some of the key components of privacy are the right to be left alone—the right to prevent others from intruding on your thoughts, communications, and living space; the right to keep sensitive information about yourself confidential and to control who has access to it; the right to move anonymously through the world, to walk or drive down the street without being tracked or having our actions recorded and analyzed; and the right to be free from being individually targeted by people—governments or corporations—who use what they know about us to support propaganda and manipulative marketing campaigns.

There does seem to be a real difference across generations in how people view privacy. For example, we know that older people are more likely to be tricked by scams and online disinformation, assuming that messaging targeted at them must be reliable. As a result, older people are far more likely than younger ones to fall for, and share, fake news on social media. Younger people tend to be much savvier about the connected environment and the ways that information propagates across the complex digital ecosystem; sometimes this results in a cynical sense that all privacy has been lost, but just as often it results in mindful curation of an online presence from an early age.

When asked if they’re worried about privacy, people often say, “I don’t have anything to hide.” But that’s really only the tip of the iceberg. Privacy isn’t just about whether there is information we feel is personal, or whether there are things we’re ashamed of; the real heart of information privacy today isn’t simply what other people—or companies, or government agencies—know about us, but how they’re using that information for their own profit or benefit, and sometimes how they’re using that information against us.

These are areas where there really needs to be a multi-pronged approach to tackling the challenges of building digital literacy and privacy awareness. It isn’t enough to focus on K-12 education; we need to be equally focused on outreach to adults and to different demographic groups in order to equip people to navigate the digital world in ways that they fully understand and are comfortable with. Libraries, senior centers, community colleges, government grant programs, and the arts can all play a key role in this. Otherwise we’ll continue to see news reports on a near-daily basis about some new privacy-related lawsuit or data breach or surveillance activity which had been going on for some time but about which the people affected are quoted as saying, “If I’d known, I never would have agreed to that!”

You write about the shortcomings of the Health Insurance Portability and Accountability Act (HIPAA), mainly because it was written so long ago that the digital world has swept past its protections. As just one example, can you talk about how fitness wearables like Fitbit are such a privacy liability?

This is one of the most common misconceptions in privacy: people mistakenly assume that because information relates to health, it must be protected by law. Nothing could be further from the truth. HIPAA relies on a very narrow set of definitions: it only applies to a few categories of businesses, such as medical offices and health insurance companies, and it only protects a narrow set of information, such as records relating to diagnoses, treatment, or payment for health care. Companies that make fitness trackers and smartwatches and health and wellness apps generally aren’t covered by HIPAA, and the data collected by all of those apps and devices generally isn’t, either. As a result, whether it’s the number of steps we take per day or our resting heart rate, our sleep patterns, diet, blood pressure, blood sugar, menstrual cycles, or emotional state or moods, the vast majority of the information that we track through wellness apps and devices isn’t protected under HIPAA—and most of it isn’t protected by any other laws in the US, either. The result is that all too often consumers using fitness apps don’t realize how that information is being used and don’t have good insights into who it’s being shared with or how or why.

That’s not to say people shouldn’t take advantage of all the positive aspects of fitness wearables and wellness apps—but in deciding which ones to use and how, it’s important to understand exactly what the company’s data protection practices are and be sure that you’re comfortable with the particular ways that information is going to be used.

Search histories—what we google—can create a very telling record of what we think, believe, know, and much more. Is it true that all of our searches and other online activities are stored, shared with data brokers, and sold to marketers? Are there no laws regulating this type of data collection?

It seems astonishing, doesn’t it? The data broker and digital advertising industries are incredibly robust sectors of the global economy, and all of it rests on access to information about our online habits: what we search for, what we share, and so on. We can go to a bookstore or library and browse through unlimited information in person without anyone knowing which section of the encyclopedia we read. But when we go online, two things are fundamentally different. First, as a technical matter, we need search engines and web browsers to help us find the information we’re looking for, and we need telecommunications infrastructure—an internet connection, website servers, etc.—to help us access that information once we’ve found out where it’s located. As a result, a host of intermediaries have to handle our search requests during the fraction of a second it takes for a query we type to retrieve information that we can click on and connect to. Second, many of the companies that facilitate our requests have realized that there is a lot of money to be made by understanding what each of us searches for, and when and why.

From a legal perspective, the laws relating to consumer privacy in the US, and in most places around the world, have largely been based on consent: the idea that we can each make sensible, well-informed decisions about what information we choose to give to companies, and about whether we consent to their terms of use. As a result, when we use services like Google, we are giving consent for Google to use our information in any way that’s consistent with its privacy policy and terms of use. Those policies are usually so long and complex that most of us click “I accept” without reading them—and we know that even if we did read them, it’s a take-it-or-leave-it choice, so we agree to the terms in order to be able to use the service. Anyone who does read them will see that those policies generally allow internet companies to share data with “partners”—which can include everyone from cloud hosting services to advertising partners, data brokers, and more. The result is that a staggering array of information about us is regularly catalogued, sold, shared, combined, and used by companies that we’ve never heard of, all as part of the digital advertising economy.

What about artificial intelligence? Do you have worries about privacy or security in that sector of technology, as well?

Artificial intelligence, or AI, refers to the use of software algorithms to create new information, and it has a wide range of uses: making predictions about a person’s future behavior, assessing how “engaged” students or teachers look in the classroom, or taking real photographs and transforming them into “deepfake” videos that make it look like a person said or did things that never happened. These kinds of AI algorithms run on personal data, and many of the most troubling issues arise with a type of AI known as machine learning (ML). Very often, ML algorithms function as a “black box”—that is, even the computer scientists who write them can’t explain exactly how or why the algorithms make the predictions that they do. The lack of insight into how they produce their outcomes is compounded by the fact that the data sets used to train the algorithms are often too narrowly focused.

The result is that we’re seeing a great many algorithms pump out biased or inaccurate results. These include facial recognition algorithms that are grossly inaccurate when trying to identify people of color; job recruiting algorithms that prioritize white male job applicants; and algorithms used by parole boards that are more likely to recommend white inmates for release than people of color. The list goes on.

The bottom line is that while AI technologies are useful for some purposes, they’re also being used in ways that disadvantage people who are already in disadvantaged groups, and sometimes even the software’s developers don’t fully understand why. The negative impact of inequitable outcomes is often made worse by the fact that some people assume that if a prediction is made by a computer, it must be more accurate or scientific than a human judgment—even though we know that all too often that’s just not the case.

You note that incredible advances in computer power—a trillionfold increase between 1955 and 2015—have led to an equally astounding amount of data being created. Is it possible that these trends will continue indefinitely?

That’s a terrific question, and the truth is, we don’t really know. Moore’s Law—the prediction that computer processing power doubles roughly every two years—has held true for decades. But we’ve nearly reached the limits of that physical design: the components of conventional electronic circuits can’t get much smaller than they are now. However, there’s a global arms race among nations and corporations to see who will be the first to achieve a major breakthrough in quantum computing, a new approach that could be as great a leap forward from today’s computers as the smartphone is from the adding machine. Quantum computing presents a number of very difficult technical challenges, so there’s no guarantee that it will become feasible in the near future. But if and when it does, it will be a real game changer that will present a whole new level of risk to data privacy, as it will make most forms of encryption that we use today obsolete.
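As a rough back-of-the-envelope check (my own arithmetic, not a figure from the book or the interview): if processing power doubles every eighteen months to two years, the cumulative gain over the sixty years from 1955 to 2015 falls somewhere between

$$2^{60/2} = 2^{30} \approx 1.1 \times 10^{9} \qquad \text{and} \qquad 2^{60/1.5} = 2^{40} \approx 1.1 \times 10^{12},$$

so the trillionfold increase cited above sits at the faster end of that range, roughly one doubling every year and a half.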

About the countless free apps at our fingertips, you say “when we’re not paying for the product, we are the product.” What do you mean by that?

This saying has become almost a cliché in Silicon Valley, and it’s only slightly less cynical than the saying that there’s a sucker born every minute. What it means is that every time an app or platform gives us a service free of charge, it’s making a profit off the information we provide when we use the product. Facebook provides its users with a social media platform, but the way it earns revenue is by selling advertising to us—by promising companies that want to find customers that Facebook can deliver their ads precisely to the specific individuals who are most likely to be interested in them. The way it does that is with our data—with the fact that Facebook knows our likes and dislikes, our preferences and demographics, and even our personality types, better than our own family members do.

This business model isn’t limited to Facebook, of course, but Facebook and Google are the dominant players in internet advertising, each of them earning billions of dollars in revenue and profits every quarter from using our data to sell advertising to third parties who are looking for people just like each of us. In other words, we’re the platforms’ users, but we aren’t their customers. Their customers are the third parties who pay them to serve up advertising to us, based on the detailed personal data that those “free” apps and platforms have collected about us.

“Whether you are a technology user, a compliance or privacy officer, or a practicing lawyer, this book will help in understanding the complex intersections of technology, the internet economy, the role of the state, and the uses of personally identifiable information and metadata.” (Rick Ledgett, former deputy director, NSA)

One of the basic requirements for maintaining privacy is the ability to distinguish fact from fiction in our digital lives, and in Chapter 8 you delve into disinformation, viewpoint manipulation, and propaganda. Why is this such an important front in cybersecurity?

The spread of online disinformation has been one of the most striking, and most dangerous, trends of the past five years.

The 2016 US presidential election made clear the power of social media to spread propaganda and disinformation. It’s no wonder—it’s a social media dictum that “content that enrages, engages,” and Facebook’s own former executives have testified that they designed the platform to be addictive, to increase profits by maximizing user engagement. Foreign adversary governments have exploited that addictive model by creating “inauthentic,” or fake, accounts on platforms like Facebook, Twitter, and Instagram: trolls operating out of St. Petersburg, Russia, sought to reignite tensions in Baltimore over the death of Freddie Gray, and Iranian government agents have sought to influence the 2020 US elections.

The problems aren’t limited to foreign adversaries or to electoral politics. Facebook’s internal research concluded that nearly two-thirds of people who joined extremist groups on the platform did so because Facebook’s algorithms recommended the content. The result: online extremism is spilling off the screen and into the streets, as QAnon believers get elected to office and militia groups use Facebook to plot the kidnapping of state governors. The spread of hate speech has become so endemic that it led to a Facebook advertising boycott over the summer, but so far pressure from advertisers and governments has been insufficient to bring about lasting change. Meanwhile, online disinformation in anti-vaxxer groups has spun off into an epidemic of COVID-denial “truthers” and baseless conspiracy theories alleging that everything from 5G wireless signals to Bill Gates caused COVID, and urging people not to get vaccinated.

There are really heartbreaking stories about families torn apart as one family member sinks into these conspiracy theories and their life starts to revolve around watching an endless string of baseless YouTube videos. In addition to the heavy toll on individuals and families, there are real consequences for society as a whole from this disinformation. At the heart of how quickly and effectively it spreads is the detailed personal information the platforms hold, which allows each of us to be targeted directly with the messages that—based on our interests, personality types, and so forth—are most likely to resonate with us.

In a recent Wall Street Journal article, Karen Renaud writes that companies shouldn’t try to scare their employees about cybersecurity “because fear can leave employees in a constant state of anxiety, which makes them unable to think clearly about threats.” Is this your belief? How do you counsel employers to inspire their employees?

I was so disappointed to read that quote, as I think it misses the boat. Cybersecurity education is vitally important, both in our personal lives and in the context of where we work and go to school. When employers remind staff to lock up their desks when they leave, to lock the safe to protect the day’s cash receipts, to lock the front door after the business has closed and they’ve left for the day, and to be careful walking to the bus stop or to their cars after dark, no one argues that these physical security reminders are a bad thing. No one says that workers will be too scared to do their jobs if you remind them that unlocked doors, open safes, and unattended cash increase the risk of burglary and theft, or that employees will be too scared to come to work if their employers remind them to be careful when they leave.

Cybersecurity is the same in important respects: when workers are aware of what the risks are, and of the simple, practical steps they can take to reduce those risks, they are among the most important lines of defense any employer has against cyberattacks. Cybersecurity preparedness really isn’t an IT problem; it’s a challenge that involves people, technology, and processes. Every cybersecurity professional knows that even the very best technology can’t prevent human error. That’s why training is so important. With that in mind, the cyber risk prevention training that employers provide should be practical, concrete, and focused on tangible ways to reduce risk—and it should demystify what the threat landscape really is. The news appropriately focuses on massive cyberattacks and risks from nation-states; workforce training can show that, even if it can’t defeat the most sophisticated and exotic threats, staff awareness and prudent action can help stave off a great many incidents.

You write extensively about government surveillance and note that here in the USA, we’re luckier than people in most countries because of our laws and transparency. But what about the super-secret stuff? How can we be certain that our law enforcement and intelligence agencies aren’t out of control, spying on anyone they may consider dangerous to society?

Over the past few years, public trust has really been eroded, and it’s vitally important that we continue to bolster government transparency and that we restore—and in some cases establish—oversight mechanisms that people can have confidence in. At the federal level, baseless allegations about the “deep state” and the politicization of intelligence have been incredibly damaging, making it harder for the existing oversight mechanisms to work properly. At the local level, many state and local police departments are moving forward with new technologies—like facial recognition in street cameras, drone surveillance for monitoring protests, and access to video feeds from people’s home doorbell cameras—without having clearly established guidelines for how they’re going to manage the privacy impact and use those technologies responsibly. As vital as government transparency is, in law enforcement and national security, there will always be some information that can’t be released directly to the public—but that makes it all the more important for there to be really effective and robust oversight mechanisms at the local, state, and federal level. These can include legislative oversight committees, appointed oversight boards, inspectors general, and other independent offices that are properly cleared for all the relevant information, whether it’s classified or related to an ongoing law enforcement investigation, etc., so that the public can have trust that the people who are charged with overseeing government actions are doing an effective job. There’s a lot of work to do on that front in 2021 and beyond.

Regarding GPS-enabled spyware, in Chapter 13, you write, “The statistics are alarming: according to the UK-based domestic violence prevention organization Women’s Aid, nearly a third of all women who have experienced domestic violence have also discovered that their abusers installed GPS tracking services or other spyware on their mobile phones. NPR has found that number to be even higher, with 85 percent of domestic violence shelters they surveyed in the United States reporting that they are working with victims who have experienced some form of online surveillance and stalking.” Can you offer us some good news on this front? Are the authorities working quickly enough to help these vulnerable women?

Authorities of all kinds have been disappointingly slow to recognize these risks and take action to address them. There are so many approaches that could chip away at these harms, but they’re underutilized right now. For example, legislatures could pass laws creating new restrictions on the use of stalkerware and enhancing criminal and civil penalties when it’s used by a party in divorce, child custody, or domestic violence cases. Jury instructions and judicial guidelines could be revised so that using stalkerware is treated as an aggravating factor that can lead to increased penalties in domestic violence cases. Police departments could update training for officers and victim advocates to include increased awareness of these technologies and how they are used against victims. With this training, police would be better able to investigate domestic violence cases, and victim advocates would be better able to advise the people, mainly women, who are the targets of this tech-enabled stalking. Government agencies that provide grant funding for organizations that counter domestic violence could steer some of that grant money to public education for potential victims, and to sponsoring work by app developers to create new stalkerware-detection apps or security services that help prevent stalkerware from being downloaded and used.

The good news is that there’s been increased attention on the gendered nature of other kinds of cyber harm, especially online harassment and revenge porn. Because stalkerware and revenge porn often stem from similar motivations and cause similar harms, the work being done to increase access to justice for revenge porn victims may result in increased awareness and action on stalkerware as well.

Barbara Hodge
