
· 15 min read
Sean Radel

Abstract

This paper explores the history, implementation, and ethical challenges of adopting Internet of Things (IoT) devices. IoT is an emerging market whose devices carry an increasing level of risk due to their invasive ability to record and track users. This paper provides an analysis of existing regulations, use cases, cybersecurity threats, and privacy risks. Finally, the paper analyzes the ethical practices that can greatly help or hurt consumer trust in IoT.

Introduction

The paper’s main research question is, “What are the primary ethical challenges of adopting Internet of Things (IoT) devices regarding privacy, security, environment, and consent?” The main goals of the paper are to demonstrate the application and rationale of IoT device adoption and the privacy and security hurdles that must be overcome for regulatory compliance and ethics. The paper aims to provide an overview of the IoT device industry that includes its history, significant products and firms, and its uses across various domains. The paper covers privacy regulation, data collection, and the security implications of such devices. Finally, the paper analyzes ethical considerations and the need for greater IoT device regulation. This comprehensive analysis of IoT devices and their privacy, security, and ethical concerns is relevant because of the increasing number of IoT devices: the number of connected devices grew by 18% to 14.3 billion globally in 2022 [1]. As the adoption of IoT devices increases, so does their ability to peer into our private lives. The devices are capable of collecting sensitive information like location and healthcare data, risks that were especially exacerbated during the height of the Covid-19 pandemic [2]. When individuals purchase IoT devices, they often also consent to privacy policies for the related device or service, which allow the company to use personally identifiable data for its own purposes or transfer it to third parties [2]. Finally, this paper is important because adopters of these technologies may not always be fully informed of what they are consenting to, businesses may not always handle the collected data ethically, and incidental users of IoT devices may have no say in how their data is collected.

Background

1. What Is “Internet of Things?”

Internet of Things devices are physical objects that use sensors and software to collect and exchange data with other devices [6]. IoT devices can be found in homes and industrial settings; particularly popular home devices include Amazon Alexa smart speakers [7] and smart home security cameras.

2. History of Internet of Things Devices

The concept of the Internet of Things first emerged in the early 1980s, when a Carnegie Mellon University graduate student modified a Coca-Cola vending machine to track the status of its inventory [3]. In 1990, John Romkey created a toaster that was controlled over the internet, but it wasn’t until 1999 that Kevin Ashton of the Massachusetts Institute of Technology proposed tracking supply chain items using radio-frequency identification (RFID) chips and coined the term “Internet of Things” [3]. In 2000, LG announced the first smart refrigerator, and in 2009, Fitbit launched its wrist-worn fitness tracker [3]. In 2011, the Nest smart thermostat (later Google Nest) launched, enabling remote control of home HVAC systems [4]. In 2016, the Mirai botnet became the first major IoT cyber attack, leveraging hacked smart home network devices to commit denial-of-service attacks [3][5]. In 2020, health tracking with IoT devices expanded due to Covid-19, and the privacy risks grew stronger [3].

3. What is Personal Data and Privacy Regulation?

Personal data, as defined by the European Union, is any information that relates to an identified or identifiable living individual [8]. This data includes, but is not limited to, a name and surname, contact information, data held by a hospital or doctor, voice recordings, surveillance footage, and location data. Personal data is protected under regulations such as the General Data Protection Regulation (GDPR) [9] and the California Consumer Privacy Act (CCPA) [10]. GDPR applies to businesses that process or control the personal data of European Union citizens, while CCPA applies to for-profit businesses that do business in California, with some other stipulations, and process or control California residents’ data [9][10]. Privacy regulations expand the rights of users to include consent, the right to deletion, and limits on retention periods. These regulations are important in the context of IoT because of the scale of collection IoT devices are capable of.

4. Computer Ethics

Computer ethics are the principles that computer scientists follow to provide safe, fair, and equal access to computing resources and services [11]. Computer ethics are not always codified in laws and regulations, but following ethical “soft laws” can increase business value through consumer trust, and corporations have an ethical duty to follow regulations like GDPR to protect consumers [12]. Ethics must guide software decisions because of issues like data retention, poor security, and the protection of personally identifiable information (PII) [13]. Engineers need to understand and manage risk as they develop their products to avoid catastrophic disasters [14].

Applications of IoT Devices

1. IoT Devices in Healthcare

IoT devices have an existing and growing presence in healthcare. The devices can be used in a range of healthcare applications involving the patient, the physician, the hospital, and even healthcare insurance [15]. Specifically, IoT can monitor patient biometrics such as blood pressure and cardiac activity [15]. IoT devices found even more purpose during the Covid-19 pandemic: remote healthcare workers are able to monitor patients' heart rates, blood pressure, and blood glucose to ensure they are safe without risking the spread of Covid [15]. While these applications are helpful, they increase risk because of possible HIPAA violations.

2. Home Assistant Devices

The most notable home assistant devices are Amazon Echo and Google Home [16][17]. These smart home devices listen for user-prompted commands and can play music, find information on the internet, communicate with contacts, and more. Privacy-conscious consumers have chosen not to adopt these technologies out of fear that they could be misused and that the companies could break their terms of service agreements [7].

3. Wearable Technologies

Wearable IoT devices started with health insights from Fitbit and have now expanded to communication, biometrics, and small apps on the Apple Watch Series 9 [18][19]. Meta has partnered with Ray-Ban to create smart glasses that not only have camera and audio capabilities but also support live streaming [20].

4. Home Security

IoT devices can be used to bolster home security with SimpliSafe and Ring products [21][22]. Ring produces IoT cameras, doorbells, and security systems and has expanded to pet collars that allow you to remotely track and communicate with your pet [22]. SimpliSafe offers a suite of security systems as well as a security monitoring service where agents can alert authorities if there are risks [21].

IoT Privacy and Security

1. Consent and Incidental Users

IoT devices, specifically smart home systems like Amazon Echo and Google Home, can provide extra utility to a household, like checking the weather or setting a timer, but not all members of a household are asked to consent to these systems [7]. Many surveyed users mentioned that after they set up their smart home system, bystanders who may not be aware of or understand how the smart speakers work became secondary users of the devices [7]. This is problematic because these users may be too young to consent to the data collection the devices perform. In Lau’s study, five of 34 participants reported that their children use their smart system [7].

2. Overcollection of Information

Lau’s work showed that adopters of smart speakers tried to place their devices where they could best hear users, while privacy-conscious non-adopters said the idea of a device that is always listening turned them away [7]. This tension matters: privacy-concerned users don’t want to be heard, while unconcerned users want to be heard as much as possible. Amazon claims that its smart home devices are not always listening and that voice data is only stored in the cloud after the “wake word” Alexa is heard [23]. Furthermore, participants in Lau’s study, despite having “nothing to hide,” strongly valued their right to privacy [7][27].
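The wake-word gating Amazon describes can be illustrated with a small sketch. This is not Amazon's implementation; the `transcribe` keyword spotter and buffer size are hypothetical. The point is only that audio heard before the wake word stays in a short local buffer and is never streamed anywhere:

```python
from collections import deque

WAKE_WORD = "alexa"
BUFFER_FRAMES = 10  # illustrative size of the rolling on-device buffer


def process_audio(frames, transcribe):
    """Yield only the audio that follows a detected wake word.

    `frames` is an iterable of short audio chunks; `transcribe` stands in
    for a hypothetical on-device keyword spotter. Pre-wake audio is held
    in a small rolling buffer and discarded, mimicking the claimed
    "not always listening" behavior.
    """
    rolling = deque(maxlen=BUFFER_FRAMES)
    awake = False
    for frame in frames:
        if not awake:
            rolling.append(frame)                  # local only; never uploaded
            if WAKE_WORD in transcribe(frame).lower():
                awake = True                       # start streaming to the cloud
        else:
            yield frame                            # would be sent for processing
```

In this sketch, everything before (and including) the frame containing the wake word stays on the device; only subsequent frames are yielded for cloud processing.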

3. IoT Device Vulnerabilities and Security

The MITRE Corporation maintains a catalog of Common Vulnerabilities and Exposures (CVEs) [24]. Querying this catalog for the keyword “IoT” returns 1,210 CVEs at the time of writing. IoT software vulnerabilities could give malicious actors access to your home network and potentially to your personal data. The Mirai botnet was the first notable IoT malware; it remotely controlled roughly a hundred thousand devices to execute a denial-of-service attack against the domain registration services provider Dyn [5][26]. IoT devices, particularly due to their rapid growth, face security challenges like a lack of standards, a lack of regular patching and maintenance, and a lack of strong encryption [29]. Users who adopt IoT devices and are not security-minded may leave themselves vulnerable to attack via weak passwords and authentication if they do not update default credentials and configurations.
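Mirai spread largely by trying factory-default usernames and passwords on exposed devices. The audit below is a minimal, illustrative sketch of the mitigation just described: checking a fleet for devices still running default credentials. The credential list and device records are made up for the example:

```python
# Illustrative subset of well-known factory-default credential pairs.
# Real audit tools ship much larger dictionaries.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
}


def find_default_logins(devices):
    """Return hosts still using a known factory-default credential pair."""
    return [
        d["host"]
        for d in devices
        if (d["username"], d["password"]) in DEFAULT_CREDENTIALS
    ]


devices = [
    {"host": "camera-1", "username": "admin", "password": "admin"},
    {"host": "doorbell", "username": "alice", "password": "s3cure-unique"},
]
print(find_default_logins(devices))  # -> ['camera-1']
```

A homeowner who changes the default password on "camera-1" removes it from this list, and with it the easiest entry point Mirai-style malware relies on.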

4. Do we need greater IoT device regulations?

As IoT device adoption becomes increasingly common, regulation needs to be followed to ensure some level of standardization. On January 1, 2020, two laws were enacted that mandate unique credentials for IoT devices [29]. California’s IoT Security Law (SB-327) and Oregon’s IoT Security Law (HB-2395) are very similar but differ in that Oregon’s law applies primarily to consumer devices, while California’s applies to businesses as well [29]. Multiple other regulations have been proposed but not passed, including the following [29]:

- Federal Cyber Shield Act (S-2020) - US Senate 2017: If passed, this bill would have required the Department of Commerce to establish a Cyber Shield Advisory Committee to recommend the format and content of Cyber Shield labels for consumer IoT devices and introduce compliance standards for cybersecurity [29].

- Protecting Privacy in Our Homes Act (S-2432) - US Senate 2019: If passed, this bill would require the Federal Trade Commission to introduce regulations requiring manufacturers to give notice to consumers when internet-connected devices contain cameras or microphones [29].

- Automatic Listening Exploitation Act (HR-4048) - US House of Representatives 2019: This bill would limit the use of any sound, speech, or video captured by a smart speaker or video doorbell and prohibit any such use without the express consent of the consumer [29].

- Internet of Things Cybersecurity Improvement Act of 2019 (S-734) - US Senate 2019: This bill would allow the federal government to use its procurement powers to raise cybersecurity standards for Internet of Things devices [29].

Ethics of IoT Devices

1. Ethical Challenges of IoT Devices

The Internet of Things poses significant ethical challenges across multiple parties. IoT developers need to implement privacy-by-design principles to ensure they are proactive and preventative against privacy-related threats [31]. As part of following privacy-by-design principles, IoT engineers must ensure their system has end-to-end security throughout the entirety of its lifecycle [31]. Engineers must respect user privacy by requiring consent before processing data. Adopters of IoT devices need to be cognizant of secondary users who may not be able to consent due to their age or understanding of the technology. Both parties must follow the Categorical Imperative when developing or deploying services that can be relatively invasive by nature [12].

2. Environmental Impact of IoT Devices

Because of their small size, IoT devices are designed to collect and process data efficiently, and they should produce low amounts of e-waste [32]. Smart climate control with IoT devices can improve the efficiency of home or industrial HVAC: smart thermostats assist in energy reduction by not heating or cooling when it is not necessary [33]. IoT also has important use cases in environmental monitoring, providing trend data on air quality and climate [34]. With low e-waste and power usage, IoT devices can be an important tool for lowering emissions and providing critical insights.
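The energy-saving behavior described above can be sketched as a simple deadband controller. The setpoint, deadband, and occupancy rule are illustrative assumptions; a real smart thermostat layers scheduling and learned preferences on top of logic like this:

```python
def hvac_action(temp_f, setpoint_f=68.0, deadband_f=2.0, occupied=True):
    """Decide whether to heat, cool, or stay idle.

    The system does nothing while the temperature sits inside a deadband
    around the setpoint, and nothing at all when the home is unoccupied,
    which is where the energy savings come from. Thresholds are
    illustrative, not taken from any real product.
    """
    if not occupied:
        return "idle"                      # no conditioning for an empty home
    if temp_f < setpoint_f - deadband_f:
        return "heat"
    if temp_f > setpoint_f + deadband_f:
        return "cool"
    return "idle"                          # inside the deadband: save energy
```

Widening the deadband or detecting vacancy (via occupancy sensors or phone location) trades a little comfort for fewer heating and cooling cycles, which is the mechanism behind the claimed energy reduction.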

3. Digital Nudging and IoT Devices

Tech companies increasingly release IoT devices alongside their existing suites of products. Apple nudges consumers to adopt the Apple Watch by including the Watch app by default on the iPhone. Furthermore, Ring advertises packages that bundle additional sensors and lists Alexa compatibility, nudging consumers to accept more invasive devices that collect different types of data than what they were originally interested in [22][35]. As the market continues to grow, consumers are increasingly pressured to adopt more and more devices that together can create a complete data profile of them.

4. Pros and Cons of IoT Devices

The Internet of Things can provide significant utility to homes and businesses with its tracking, sensing, and listening capabilities. Virtual assistants can provide insights on demand, and cameras can alert users to threats at their homes or simply to a package being delivered. Unfortunately, these benefits don’t come without risk, and the major con is that breaches and unethical actions do take place. In 2019, Apple contractors were caught regularly listening to confidential conversations captured through Siri voice data [26]. Another major con is the possibility of cybersecurity attacks. If operated ethically, IoT devices provide attractive features, but with unethical actors, those cybersecurity and privacy risks will be realized.

5. Business Case for IoT Device Ethics

There is a strong business case for ethics in IoT devices. Lau’s research found that privacy-concerned individuals didn’t want to adopt IoT devices because businesses may act unethically and break their terms of service [7]. If businesses increased their trustworthiness by following ethical soft laws, they could win over concerned non-adopters.

Conclusion

The rapid growth of IoT devices since 1999, when Kevin Ashton coined the term “Internet of Things,” has placed privacy-invasive devices in the homes of millions [3]. As these devices continue to grow more popular, adherence to regulations like GDPR and CCPA is paramount to protecting consumers from unethical actors [9][10]. To further protect users from the rapidly growing list of IoT-related CVEs, cybersecurity standards must be taken seriously, and privacy-by-design standards should be at the forefront of the engineering process.

IoT devices are becoming more popular across a wide range of domains, from home assistants with thermostat control to supply chain tracking and Covid-19 response, so ethics and acting with moral duty are important to increasing trust in the technology. Consent has never been more important, with biometrics and confidential voice recordings at risk. Conclusively, industry leaders need to follow ethical soft laws to protect their businesses and foster consumer trust.

References

  1. State of IOT 2023: Number of Connected IOT Devices Growing 16% to 16.7 Billion Globally IoT Analytics, 3 Aug. 2023, source
  2. Elvy, S. (2022, February 9). Data Privacy and the internet of things. unesco. source
  3. World Economic Forum. (2020, December). The State of the Connected World 2020. source
  4. Marchant, Natalie, (2021, March 31). What is the Internet of Things? source
  5. Gamblin, Jerry, (2016, October) Mirai-Source-Code, Github source
  6. Oracle, What is IoT? source
  7. Lau, Josephine, Benjamin Zimmerman, and Florian Schaub. "Alexa, are you listening? Privacy perceptions, concerns and privacy-seeking behaviors with smart speakers." Proceedings of the ACM on human-computer interaction 2.CSCW (2018): 1-31.
  8. “What Is Personal Data?” European Commission, source: commission.europa.eu/law/law-topic/data-protection/reform/what-personal-data_en. Accessed 6 Nov. 2023.
  9. Official Legal Text. General Data Protection Regulation (GDPR). (2022, September 27). source
  10. California Consumer Privacy Act (CCPA). State of California - Department of Justice - Office of the Attorney General. (2023, May 10). source
  11. Radel, Sean, (2023, September, 10) “Blog 1, Ethics”, source
  12. Johnson, Robert and Adam Cureton, "Kant’s Moral Philosophy", The Stanford Encyclopedia of Philosophy (Fall 2022 Edition), Edward N. Zalta & Uri Nodelman (eds.),source.
  13. Lawton, George. “5 Examples of Ethical Issues in Software Development: TechTarget.” Software Quality, TechTarget, 22 Dec. 2020, source.
  14. Lynch, William & Kline, Ronald. (2000). Engineering Practice and Engineering Ethics. Science, Technology, and Human Values, v.25, 195-225 (2000). 25. 10.1177/016224390002500203.
  15. Mukati N, Namdev N, Dilip R, Hemalatha N, Dhiman V, Sahu B. Healthcare Assistance to COVID-19 Patient using Internet of Things (IoT) Enabled Technologies. Mater Today Proc. 2023;80:3777-3781. doi: 10.1016/j.matpr.2021.07.379. Epub 2021 Jul 24. PMID: 34336599; PMCID: PMC8302836.
  16. Google, 2023, What is Google Home, source
  17. Amazon, 2023, Alexa features, source
  18. Apple Watch, 2023, source
  19. FitBit, 2023, source
  20. Meta, (2023, September 27), Introducing the New Ray-Ban | Meta Smart Glasses, source
  21. SimpliSafe, 2023, source
  22. Ring, 2023, source
  23. source
  24. MITRE (2023, November) source
  25. MITRE, (2023, November) source
  26. CloudFlare, 2023, What is Mirai? source
  27. Solove, Daniel J., 'I've Got Nothing to Hide' and Other Misunderstandings of Privacy. San Diego Law Review, Vol. 44, p. 745, 2007, GWU Law School Public Law Research Paper No. 289, Available at SSRN: source
  28. Wachter, S. (2018). Normative challenges of identification in the internet of things: Privacy, profiling, discrimination, and the GDPR. Computer Law & Security Review, 34(3), 436–449. source
  29. KeyFactor, 2023, IoT Device Security + How to Get Started, source
  30. Azrour, Mourade & Irshad, Azeem & Chaganti, Rajasekhar. (2022). IoT and Smart Devices for Sustainable Environment. source
  31. Cavoukian, Ann (2011). "Privacy by Design" (PDF). Information and Privacy Commissioner.
  32. Belokrylov, Alexander, (2022, September 26)“The Environmental Impact Of IoT”, Forbes, source
  33. Anderson, Colleen, (2020, March 11) “How IoT Will Transform Heating Systems”, Contractor Mag, source
  34. Jones, Quinn, (2022, April 15), IoT-Based Environmental Monitoring: Types and Use Cases, DIGI, source
  35. Schneider, C., Weinmann, M., and vom Brocke, J. (2018). Digital Nudging–Guiding Choices by Using Interface Design, Communications of the ACM, 61(7), 67-73.
  36. Mozilla, (2022, November 9), Apple Watch. source

· 4 min read
Sean Radel

Main characteristics and elements of the E.U.'s Artificial Intelligence Act (AIA)

The E.U.’s Artificial Intelligence Act (AIA) is a legal framework, likely to be implemented in early 2024, that governs the sale and use of AI. The AIA is similar to GDPR, the Digital Services Act, and the Digital Markets Act in that it regulates the digital economy. All AI systems that are “placed on the market, put into service or used in the EU” are subject to the regulation, with three exceptions: AI systems for military and national security purposes, free and open-source AI systems, and systems built for scientific research (Hoffman, 2023). The regulation is based on risk categories: unacceptable, high, low, or minimal risk (Wörsdörfer, 2023), and it prohibits a system outright if it poses an unacceptable risk. With the unacceptable category, the regulation aims to counter systems that are manipulative, exploitative, or aimed at social control. Systems are considered high-risk if they are already subject to a different safety regulation (toys, medical devices) or if they fall into the following use cases: biometrics; critical infrastructure; education and vocational training; employment, workers management, and access to self-employment; access to essential services; law enforcement; migration, asylum, and border control management; and administration of justice and democratic processes (Hoffman, 2023). High-risk systems are subject to a conformity assessment to ensure they meet all AIA standards, and providers must register their service in a public E.U. database that lists all high-risk AI services (Wörsdörfer, 2023). Low and minimal-risk systems have a transparency obligation to fulfill: users must be notified that they are interacting with artificial intelligence when the AI system detects emotions, determines associations with social categories based on biometric data, or generates or manipulates image, audio, or video content (Wörsdörfer, 2023).
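The tiering logic just described amounts to a lookup from a system's use case to its obligations. The category names below paraphrase the summary above; this is an illustrative sketch of the Act's structure, not legal guidance:

```python
# Paraphrased category names; the real Act defines these in legal language.
UNACCEPTABLE = {"manipulative", "exploitative", "social control"}
HIGH_RISK_USES = {
    "biometrics", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration", "justice",
}


def aia_obligations(use_case):
    """Map a (simplified) use case to the obligation tier described above."""
    if use_case in UNACCEPTABLE:
        return "prohibited"
    if use_case in HIGH_RISK_USES:
        return "conformity assessment + EU database registration"
    return "transparency obligation"  # low/minimal risk
```

In the real regulation the hard part is classification itself, which is exactly the self-labeling problem the reform section below takes up.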

AIA's strengths and weaknesses from a computer ethics perspective

From a computer ethics perspective, the AIA has good intentions but still has shortcomings. The AIA protects users from potentially harmful technologies and increases visibility into the high-risk systems they are opting in to. The AIA’s position on unacceptable risk is good because it protects users from corporations using the technology for malicious purposes, but it is contradictory because there are exemptions for certain usages. The regulation does not prohibit the military or national security services from using AI for malicious purposes, so it may not really help at all. Even the exemption for research could be potentially dangerous and may require something akin to an IRB to determine whether the AI is truly ethical or not.

Possible reform measures that could help to strengthen the AIA

The first aspect of the AIA that needs reform is that developers determine their own risk category. If developers mislabel their risk category, they are subject to a 20 million euro fine, or 4% of their global turnover. I believe that this is a very high fine considering how high-level and abstract the current risk categories are. It will be challenging for developers to understand the regulation in its early days, and this could leave companies vulnerable. A possible change is to fine the firm based on the effect of the mislabeling and to create a structured process for evaluating risk levels; the fine should scale with the severity of the violation. The risk system itself is the second aspect of the AIA that I think should face reforms. With the rapid development of AI, developers may try to work around the high-risk labeling system and create systems that are functionally high-risk but legally not. Finally, something that is still unclear to me with the AIA and GDPR is what happens to an AI model trained on data that violates GDPR. We are entering foggy legal territory, but personally, I think AI trained on data that violates GDPR should be considered an unacceptable level of risk and banned.

References:

Hoffman, S. (2023, September 26), The EU AI Act: A Primer, CSET Georgetown https://cset.georgetown.edu/article/the-eu-ai-act-a-primer/

Wörsdörfer, Manuel, The E.U.’s Artificial Intelligence Act: An Ordoliberal Assessment (August 17, 2023). Forthcoming in: AI and Ethics, Available at SSRN: https://ssrn.com/abstract=4544276 or http://dx.doi.org/10.2139/ssrn.4544276

Regulation of the European Parliament and of the council ... - eur-lex. (n.d.). https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF

· 4 min read
Sean Radel

Introduction

The issue of technology changing the job market is hardly new: Frey and Osborne cite William Lee’s situation with the invention of the stocking frame knitting machine in 1589 (Frey, 2017, 6). I would loosely consider myself a proponent of automation leading to job creation, and completely a proponent of utilizing automation and technology to create better, more sustainable jobs. However, I agree with the Queen’s reasoning for not granting William Lee his patent: keeping her subjects employed may have helped the short-term stability of the kingdom, but long-term technological advancements should undoubtedly be seen as a benefit. When the sewing machine was first patented in 1846, the mass production of garments allowed housewives and seamstresses to find employment at factories. These jobs created by technological advancement lowered the prices of garments, which increased their affordability and raised household income for those who chose to work in the factories (Stocks, 2021). Alternatively, Foxconn of China plans to automate 60,000 to 110,000 employees out of a job by implementing “FoxBots” (Woersdoerfer, 2023, 18). This is a case of mass unemployment due to automation, but is it really such a bad thing? The New York Times reports that conditions at Foxconn facilities do not meet Amazon’s supplier code of conduct, citing waves of employee suicides, child labor, and unsafe working conditions (Condliffe, 2018). While an argument can be made that replacing these employees with robots could remove ethical violations, the point of discussion is that these factory jobs may be very automatable because they are generally repetitive and follow explicit instructions. Still, even when they don’t, computerization is improving at solving non-routine tasks (Frey, 2017, 15).

Robot-Proof Jobs

While computerization may be changing the labor market, I think that many fields can avoid a complete computer takeover. Firstly, while individual jobs may be at risk, most industries will not shed human workers entirely. Consider farming and construction, which have technologies like tractors and excavators to help with labor. The laborers who may have been in the field sowing seeds have been replaced by “robots,” but the farmer managing the property is completely robot-proof and likely benefiting economically from the change. I think that managers are certainly more robot-proof than the individuals below them in a corporate structure; rather than managing just people, they may manage equipment and the people who maintain it. As we saw in the slides, social jobs are likely safe, so I would consider most service industry and leadership jobs to be generally robot-proof (Woersdoerfer, 2023, 21). While some jobs are more robot-proof than others, I don’t think any are immune to change due to technology. Jobs in the creative art industry may become even more valuable because competition from AI-generated art will push people to produce better work, and consumers may want “real” art. Finally, while an AI revolution may come for some cognitive and labor-intensive jobs, I believe that improvements in education and training can help workers adapt to the constantly changing job market.

Reforms

Technological revolutions can relieve workers of poor conditions, as in the possible implementation at Foxconn, but they may leave thousands unemployed. The way forward is to minimize the cost of technical and public colleges to allow unemployed workers to make a career change. I would not argue for implementing UBI as a safety net because I think that the government should only provide an environment where citizens can earn a livelihood, not provide the livelihood itself. If cognitive laborers have sufficient access to education, they can robot-proof their own careers. Physical laborers are a harder group to help through reforms because you can be smarter than AI, but it is harder to be stronger than a tractor. Finally, advocating for a culture of lifelong learning can help both cognitive and physical laborers stay ahead of the technological revolution.

Sources:

  1. Frey, Carl Benedikt, and Michael A. Osborne. “The future of employment: How susceptible are jobs to computerisation?” Technological Forecasting and Social Change, vol. 114, 2017, pp. 254–280, source.
  2. Admin, Stocks. “How Did the Sewing Machine Impact the Industrial Revolution?” Industrial Embroidery Machines & Sewing Equipment Suppliers, 29 Mar. 2021, source.
  3. AI and Labor Markets (Week 5) Slides, Manuel Woersdoerfer, 2023
  4. Condliffe, Jamie. “Foxconn Is under Scrutiny for Worker Conditions. It’s Not the First Time.” The New York Times, The New York Times, 11 June 2018, source

· 4 min read
Sean Radel

Computer ethics

Computer scientists follow computer ethics to provide safe, fair, and equal access to computing resources. Computer ethics are usually soft laws that should be followed to improve consumer trust. To expand on that, it would be unethical to collect data from users that could be used to harm them. This could come in the form of blackmail using leaked data or writing algorithms that implement the data in a predatory way to exploit the user. To expand on fairness and equality, users should not have to pay exorbitant amounts of money to access the software they need or be discriminated against when trying to access information or software.

Ethical Theories

Utilitarianism is an ethical theory based solely on consequences and outcomes. An action is judged right or wrong by whether its outcome is better for the greatest number of people. Utilitarianism evaluates decisions as pleasure versus pain and tries to quantify human suffering and success. Utilitarianism can be discriminatory toward minority groups because it ignores just or fair distribution (Savulescu, 2020).

In a business context, utilitarianism is my preference, because it is good to appeal to most people if you are trying to sell a product (mass appeal). Utilitarianism is also the best fit in a political environment, though I would concede that some principles from other ethical theories are fundamental as well; for example, distributive justice from virtue ethics, which rewards individual merit and worth. In a democratic society, I think that utilitarianism must be the best option because providing the best for the majority in a majority-rule society should lead to the retention of power. To improve utilitarianism, I would add substantial minority rights to account for negative externalities.

Business Case

Ethics must guide software professionals in their decisions because of issues like algorithmic bias, addictive app design, questionable data ownership, and poor security and protection of personally identifiable information (PII) (Lawton, 2020). Computer science professionals should create software with intention and consequence in mind and understand the negative externalities that may arise. These issues are best addressed by following Kantian ethics: with GDPR and CCPA in mind, businesses acting from duty should collect minimal personal data and securely store and transfer sensitive information and PII. Aligned with Kantian ethics, following data privacy principles can make an actual difference on the ground.

Recently, fashion retailer Forever 21 suffered a data breach that exposed names, Social Security numbers, dates of birth, bank account numbers, and information about Forever 21 health plans, including enrollment and premiums, for over 500,000 individuals (Hope, 2023). On one end, had the malicious actors acted ethically, they would not have attacked the system. On the other end, had the data controllers acted with a greater sense of duty, they could possibly have protected users by defending against the attack. There is absolutely a business case for computer ethics, and I think data privacy is paramount to it. In the era of GDPR fines, it is economical to be ethical and to follow the duty to adhere to the law. In May 2023, Meta was fined $1.3 billion for failing to comply with GDPR (Satariano, 2023). Consumers lose trust when businesses act unethically (over-collect data, fail to make dutiful decisions). Personally speaking, I allow Meta to collect only the minimum data required to use its services, and I won’t download TikTok because I have concerns about its trustworthiness regarding ethics and privacy (Gillies, 2022). In closing, companies may earn more consumer trust if ethics are properly implemented in the computing industry, which could lead to a net business benefit.

References & Sources:

  1. Wörsdörfer, Manuel, Ethical Theories (Aristotelianism, Utilitarianism, Kantianism) (Supplemental Material) (Week 1)
  2. Savulescu J, Persson I, Wilkinson D. Utilitarianism and the pandemic. Bioethics. 2020 Jul;34(6):620-632. doi: 10.1111/bioe.12771. PMID: 32433782; PMCID: PMC7276855.
  3. Lee, Francis, et al. “Utilitarianism: Pros and Cons.” Phronesis, Eidenai OER, 1 July 2019, pressbooks.
  4. Lawton, George. “5 Examples of Ethical Issues in Software Development: TechTarget.” Software Quality, TechTarget, 22 Dec. 2020, www.techtarget.com/searchsoftwarequality/tip/5-examples-of-ethical-issues-in-software-development.
  5. Johnson, Robert, and Adam Cureton. “Kant’s Moral Philosophy.” Stanford Encyclopedia of Philosophy, Stanford University, 21 Jan. 2022, plato.stanford.edu/entries/kant-moral/.
  6. Hope, Alicia. “Data Breach at Apparel Giant Forever 21 Impacts over 500,000 Individuals.” CPO Magazine, 5 Sept. 2023, www.cpomagazine.com/cyber-security/data-breach-at-apparel-giant-forever-21-impacts-over-500000-individuals/.
  7. Satariano, Adam. “Meta Fined $1.3 Billion for Violating E.U. Data Privacy Rules.” The New York Times, The New York Times, 22 May 2023, www.nytimes.com/2023/05/22/business/meta-facebook-eu-privacy-fine.html.
  8. Gillies, Sierra. “TikTok’s Addictive and Unethical Algorithm.” Medium, SI 410: Ethics and Information Technology, 10 Mar. 2022, medium.com/si-410-ethics-and-information-technology/tiktoks-addictive-and-unethical-algorithm-3f44f41f1f3c.