Digital Ethics and Privacy Technology: How to Ethically Manage Data

The collection and analysis of personal data undeniably benefits both consumers and the greater social good. From governments’ detection of potential terrorist activities to supermarkets’ ability to keep popular items in stock, big data applications use forecasting and predictions to effectively solve problems.

As with most technologies, though, solving one problem creates a host of others. In the case of big data collection and analysis, one of the most serious problems is the potential violation of data ethics. Data ethics refers to using data in accordance with the wishes of the people from whom it is collected.

Organizations are facing growing pressure to handle consumer data responsibly and transparently. As such, they need to attend to questions of data usage, digital ethics, and privacy technology. Indeed, organizations should not only understand the ethical issues behind data collection and the current regulatory environment—they should proactively implement a plan and practice of data ethics.

 

Ethical Data Usage in an Era of Digital Technology and Regulation

The debate over ethical data harvesting and usage has occupied the public consciousness of late, thanks largely to three factors. The first is advances in artificial intelligence and network technology, which have fostered astronomical growth in digital interconnection and data-producing devices. From fitness trackers to city infrastructure, the use of “smart” networked devices—otherwise known as the Internet of Things (IoT)—has grown exponentially in the past decade. A report from the Congressional Research Service estimates that the number of active IoT devices will grow to 21.5 billion by 2025.

The second factor is the increasingly normalized collection of data from online activity. Users generate data whenever they shop, use search engines, or interact on social media. Data can be collected ethically, as when consumers willingly submit their information to retailers. Much of the time, however, third parties with no direct relationship to users collect online data through cookies or other trackers. This is an ethically questionable practice.

The third factor is the controversy over social media regulation. Social media companies such as Facebook and Twitter have recently made decisions to ban what they deem harmful content. Observers debate whether such moves responsibly curb harm or chill free speech. These bans revive wider concerns about social media companies’ approach to digital ethics, regulation, and data privacy.

Primary Concerns about Ethical Data Usage

A 2019 Pew Research Center report suggests that nearly eight in ten US consumers assume their data is being tracked by advertisers, technology firms, and other companies. Most of those consumers also worry about data privacy and how their personal data is used.

They are arguably right to be concerned—companies frequently sell the data they collect to multiple third parties, such as marketers who use it to tailor ads to particular demographics. But this data can also be used for more nefarious purposes. According to a study by faculty at the University of California, Berkeley, algorithmic bias led lenders to charge otherwise equivalent Latinx and African American borrowers higher mortgage interest rates.

Another major concern is data security, as evidenced by the dozens of high-profile personal data breaches in the past decade. According to IBM’s Cost of a Data Breach Report 2021, the average cost of a data breach rose from $3.86 million to $4.24 million in 2021, the highest average total cost in the seventeen-year history of the report.

Current Regulations on Ethical Data Usage and Their Limitations

Data privacy laws govern some aspects of data collection and data protection. Regulation is tricky, however. Though corporations are often multinational, data privacy laws are confined to individual countries (or, at best, blocs of countries such as the EU). If a company is headquartered in the US but also operates in China and Europe, it can be unclear which laws apply. This makes it difficult to assess the regulatory environment as a whole.

Some laws err on the side of protecting the citizen/consumer. For instance, under the European Union’s General Data Protection Regulation (GDPR), companies must obtain an individual’s explicit consent before collecting their data, separately for each purpose for which the data will be used. Data subjects may also withdraw their consent at any time.
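
To make that per-purpose model concrete, here is a minimal sketch of how consent might be recorded in code. The class, purpose names, and in-memory storage are illustrative assumptions, not a reference to any particular compliance tool:

    # Per-purpose consent, as the GDPR passage above describes: consent is
    # recorded separately for each purpose and can be withdrawn at any time.
    from datetime import datetime, timezone

    class ConsentLedger:
        def __init__(self):
            # Maps (subject, purpose) to the time consent was given, or None.
            self._consents = {}

        def grant(self, subject_id, purpose):
            """Record explicit consent for one purpose only."""
            self._consents[(subject_id, purpose)] = datetime.now(timezone.utc)

        def withdraw(self, subject_id, purpose):
            """Data subjects may withdraw consent at any time."""
            self._consents[(subject_id, purpose)] = None

        def has_consent(self, subject_id, purpose):
            return self._consents.get((subject_id, purpose)) is not None

    ledger = ConsentLedger()
    ledger.grant("user-42", "marketing_email")
    print(ledger.has_consent("user-42", "marketing_email"))  # True
    print(ledger.has_consent("user-42", "analytics"))        # False: a separate purpose
    ledger.withdraw("user-42", "marketing_email")
    print(ledger.has_consent("user-42", "marketing_email"))  # False after withdrawal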

By contrast, the US has a patchwork of federal and state laws governing data privacy. These regulate either certain types of data (e.g., credit data) or data for certain populations (e.g., children). But federal privacy laws do not regulate the vast majority of the data companies collect. The major exception is a recent California law, the California Consumer Privacy Act (CCPA), modeled after the GDPR’s stringent data privacy protections. Under the CCPA, companies can be fined for violating its individual rights provisions and sued for data breaches.

Elsewhere in the world, the data privacy regulatory landscape is in a similar state of flux. Given disparate privacy laws and historical and cultural differences between countries, a unified approach is unlikely. However, most countries do share some key data protection elements. These include restrictions on cross-border transfers of personal data, notification requirements in the event of a data breach, and individual rights of access and correction.

 


 

How to Address Privacy Technology and Digital Ethics Questions

The debate over digital ethics and privacy technology falls along clear lines. On one side are the consumers and citizens whose data can be collected, analyzed, and potentially compromised. On the other side are the governments and corporations that collect the data, often using artificial intelligence technologies to do so. This section addresses the role digital ethics might play in these organizations.

Digital Ethics Questions for Companies

Even in the absence of explicit regulations on data collection, companies can still be proactive in addressing questions of digital ethics and data privacy. The social media platforms mentioned earlier are examples of companies that have sought to implement measures to address these ethical challenges, however controversial their specific approaches may be.

Different industries collect and use digital data differently, and requirements can change based on data sensitivities and classes of data. But regardless of how and why the data is collected, companies can start by considering the following questions on digital ethics and data privacy:

  • How is data currently collected, stored, and used in the organization?
  • Is the current data policy compliant with model privacy laws like GDPR or CCPA?
  • What is the organization’s current privacy policy, and how is it communicated to various stakeholders?
  • How might policies and procedures be changed to provide customers with agency and choice in stating their preferences about privacy?
  • What opportunities are there to embed privacy considerations into future products and services?

A thorough reckoning with questions regarding data collection, storage, and use is the first step in creating a sustainable and ethical data policy.
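
One simple way to begin answering the first question above is to keep a machine-readable inventory of data sets and audit it against policy. The following sketch is purely illustrative; the field names, data sets, and approved purposes are assumptions made for the example:

    # An illustrative data inventory: each entry records how a data set is
    # collected, where it is stored, and every purpose it serves.
    from dataclasses import dataclass

    @dataclass
    class DataInventoryEntry:
        data_set: str
        collected_via: str   # how the data enters the organization
        storage: str         # where and how it is stored
        purposes: list       # every use the organization makes of it
        retention_days: int

    inventory = [
        DataInventoryEntry("customer_emails", "checkout form",
                           "encrypted database", ["order receipts", "marketing"], 730),
        DataInventoryEntry("site_analytics", "first-party cookie",
                           "aggregated logs", ["product improvement"], 90),
    ]

    # A simple audit: flag any recorded purpose that lacks documented consent.
    approved_purposes = {"order receipts", "product improvement"}
    for entry in inventory:
        gaps = [p for p in entry.purposes if p not in approved_purposes]
        if gaps:
            print(f"{entry.data_set}: purposes lacking documented consent: {gaps}")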

Digital Ethics Questions for Governments and Regulatory Bodies

Governments play perhaps the most overarching role in addressing questions of digital ethics and data privacy. Currently, though, regulations differ from country to country, often because of clashing philosophies on digital ethics.

Data ethics frameworks must be implemented through a process of governance that can adapt to changing circumstances. For instance, a governing body might determine which pieces of a medical record should be private and which must be public. Over time, these rules should be reassessed and revised when necessary.

The Role of Artificial Intelligence in Questions of Digital Ethics

Since advances in artificial intelligence have brought questions of digital ethics and data privacy to the fore, the role of AI technology is increasingly relevant.

The most significant AI issue is the widespread use of machine learning on collected data, particularly the use of computers to train algorithms on large data sets. As many have acknowledged, models trained in this way can reintroduce or reinforce the biases present in their training data. Governments, technology watchdogs, and other organizations are currently working to correct these issues of discrimination by machine.
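
One common first check for such bias is to compare favorable-outcome rates across groups. The sketch below computes a simple “disparate impact” ratio on toy data; the data, group labels, and the four-fifths threshold are illustrative assumptions, not a prescribed legal or statistical test:

    # Compare favorable-outcome rates between groups; a ratio far below 1.0
    # suggests the data or model treats one group worse than another.
    def disparate_impact(outcomes):
        """outcomes: list of (group, favorable) pairs, favorable in {0, 1}."""
        rates = {}
        for group in {g for g, _ in outcomes}:
            results = [fav for g, fav in outcomes if g == group]
            rates[group] = sum(results) / len(results)
        return min(rates.values()) / max(rates.values())

    # Hypothetical loan-approval outcomes: (group label, approved?).
    data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
            ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

    ratio = disparate_impact(data)
    print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33 for this toy data
    if ratio < 0.8:  # the common "four-fifths" rule of thumb
        print("Potential adverse impact; review the training data and model.")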

 

How to Incorporate Digital Ethics into Your Technology

As uses of data and artificial intelligence continue to grow, understanding the core components of digital ethics becomes ever more important. That understanding helps ensure that technological innovations don’t outstrip civil liberties and that governments and companies maintain the trust of consumers and citizens. For businesses, unethical uses of data, whether reinforcing bias and discrimination or exposing consumer data in breaches, ultimately affect the bottom line.

Technological Solutions for Digital Ethics

For the ethical use of AI, organizations must develop and implement measures specific to the collection, storage, and use of data. One solution is to create a “data trust,” which serves as a fiduciary for data providers and governs the data’s proper use. Another is to randomize or aggregate the identifying information of data subjects so that the data cannot readily be linked back to specific individuals. This addresses one of the main ethical concerns of data collection. And a more strategic use of AI systems—through what Brian Uzzi calls “blind taste tests”—can help reduce bias in AI.
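
As an illustration of the randomize-or-aggregate approach, the sketch below replaces a direct identifier with a salted one-way hash and then releases only group-level statistics. The field names and salt handling are assumptions for the example; production systems need careful key management and a re-identification risk review:

    import hashlib
    import os
    from collections import defaultdict

    SALT = os.urandom(16)  # would be kept secret and rotated in practice

    def pseudonymize(identifier):
        """Replace a direct identifier with a salted one-way hash."""
        return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:12]

    records = [
        {"email": "ana@example.com", "zip": "94110", "spend": 120},
        {"email": "ben@example.com", "zip": "94110", "spend": 80},
        {"email": "cho@example.com", "zip": "10001", "spend": 200},
    ]

    # Step 1: strip direct identifiers before storage or sharing.
    pseudonymized = [{**r, "email": pseudonymize(r["email"])} for r in records]

    # Step 2: release only aggregates (here, average spend per ZIP code).
    by_zip = defaultdict(list)
    for r in pseudonymized:
        by_zip[r["zip"]].append(r["spend"])
    for zip_code, spends in by_zip.items():
        print(zip_code, sum(spends) / len(spends))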

More generally, engineers can adopt the philosophy of privacy by design for products and services. Privacy by design takes a page from universal design, which holds that buildings, environments, and products should be designed to be usable by everyone. Likewise, under privacy by design, products and services are private and restricted by default until the individual owner changes the permissions.
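
A minimal sketch of that default-deny principle might look like the following, where every sharing setting starts closed and only the owner may open it. The class and setting names are hypothetical, not a standard API:

    from dataclasses import dataclass

    @dataclass
    class PrivacySettings:
        owner: str
        # The most restrictive option is the default for every setting.
        profile_visible: bool = False
        share_with_partners: bool = False
        allow_analytics: bool = False

        def grant(self, requester, setting):
            """Only the owner can relax a default; anyone else is refused."""
            if requester != self.owner:
                raise PermissionError("only the data owner may change permissions")
            setattr(self, setting, True)

    settings = PrivacySettings(owner="alice")
    print(settings.profile_visible)             # False: private by default
    settings.grant("alice", "profile_visible")  # the owner opts in
    print(settings.profile_visible)             # True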

What to Be Aware of before Incorporating Digital Ethics into Technology

As Reid Blackman points out in the Harvard Business Review, “AI doesn’t just scale solutions—it also scales risk.” In other words, the more extensive the data collection effort, the greater the risk of discrimination or data breaches. Organizations that ignore questions of digital ethics and data privacy, or that treat them on an ad hoc basis, risk wasting resources and creating inefficiencies in product development and marketing, which ultimately cuts into profits. The best solution is to create clear protocols and a systematic, comprehensive risk-mitigation plan that operationalizes AI ethics.

 

The Future of Digital Ethics and Privacy Technology

A broad awareness and thoughtful application of digital ethics will be critical for a sustainable future. Artificial intelligence must be applied correctly to data collection and analysis if it is to help resolve our most urgent problems and preserve civil liberties. A recent study in Nature showed how artificial intelligence technologies could either facilitate or inhibit the achievement of the UN’s 2030 Agenda for Sustainable Development. But more generally, a failure to develop a broad program of digital ethics will mean continuing mistrust and suspicion on the part of citizens toward government and business.

Intelligent control presents one intriguing possibility for the future of digital ethics and privacy technology. One recent paper describes how an ethical reasoning architecture generated its own data for learning moral rules and thus reduced the possibility of human bias.

Many citizen advocacy organizations are actively lobbying to support digital ethics and privacy technology. For instance, the ACLU has a comprehensive plan in place to address violations of privacy related to new technologies, including mass surveillance, workplace privacy violations, and medical and genetic privacy. Among other things, the ACLU is working to require warrants for law enforcement access to electronic information, to unveil government surveillance practices, and to promote technologies that create privacy protection.

However, in the end, digital ethics and data privacy is everybody’s business. From C-suite executives to legislators to engineers, leaders across a broad range of organizations and industries have a stake in creating sustainable, comprehensive policies of digital ethics.

Interested in joining IEEE Digital Privacy? IEEE Digital Privacy is an IEEE-wide effort dedicated to championing the digital privacy needs of individuals. The initiative strives to bring the voice of technologists to the digital privacy discussion and its solutions, taking a holistic approach to privacy that also incorporates economic, legal, and social perspectives. Join the IEEE Digital Privacy Community to stay involved with the initiative’s program activities and connect with others in the field.

 

Learn more in our course program: Protecting Privacy in the Digital Age

Access the courses