Volume 2023, Number October (2023), Pages 1-5
Ubiquity Symposium: Digital Economy: Dark Innovations in the Digital Economy
Innovations are adoptions of new practices in communities. They are adopted because people perceive value in them. However, around the edges of standard community practice, new practices can emerge that have negative value. These are called dark innovations. Four examples are examined here: security and privacy, subscriptions, automation, and class abstraction. There is no easy way to eliminate dark innovations.
The patent system was set up to give inventors exclusive rights to their inventions for a limited time so that they could have a chance to get their inventions to market without the risk of expropriation by other parties. But a scourge to the system started to emerge around 1990: patent trolls. They are individuals or companies that acquire unused patents (for example, from bankrupt companies) and then file lawsuits against companies for infringement of those patents. Patent trolls filed more than 2900 cases in the U.S. in 2012, a six-fold increase from 2006. The trolls are not aiming to bring a defunct invention to market, but instead to extort money from other firms in drawn-out, complex patent litigation. The U.S. government has tried to curb the practice through legislation, with only modest success. Patent trolls are an example of a dark innovation—a new practice that does not bring a positive value to the community.
Dark innovations are appearing in the digital economy. I will discuss four: security and privacy, subscriptions, automation, and abstraction. Although there is reason to believe many people are concerned about them, there are no easy solutions. However, if we become aware of the problems, we may be able to do something about them.
Security and Privacy
When the basic protocols of the internet were designed in the 1970s, security and privacy were not major concerns. The early internet was a closed community of government-supported researchers who largely trusted each other. A few proposals to incorporate caller ID into the protocols were met with scorn by powerful pioneers who did not want the internet to become a surveillance tool for a Big Brother government. They put a high value on anonymity. Nothing like caller ID was designed into the internet protocols. As a result, no one can be sure who is attempting to connect to their server. Because anonymity was baked in, and many people still value it today, there is little interest in retrofitting a caller-ID feature into the internet.
Anonymity has enabled hackers and intruders to break into systems, spread malware, lock up data until a ransom is paid, sell stolen private data on black markets, go phishing, send massive spam, and block servers by overloading them with fake requests. They are a growing scourge that makes life on the internet increasingly difficult; this has prompted calls for the internet to develop an "immune system" that, like a human immune system, can hunt down malware, block hackers, and form antibodies against future threats. Unfortunately, no one knows how to do this. Instead, we have experienced a strong surge of system fortification. In addition to strong passwords, users must use multifactor authentication (MFA), which sends them a verification code to complete the login. Some authenticators use biometrics (such as the face recognizer on Apple iPhones) to speed up the authentication process.
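The verification codes that MFA authenticator apps display are typically generated by the time-based one-time password (TOTP) scheme of RFC 6238: the app and the server share a secret, and each independently computes a short code from the current time. A minimal sketch in Python using only the standard library (the base32 secret below is the RFC test-vector secret, used here purely for illustration):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Compute a time-based one-time password per RFC 6238 (HMAC-SHA-1)."""
    key = base64.b32decode(secret_b32)          # shared secret, base32-encoded
    now = time.time() if t is None else t
    counter = int(now // step)                  # 30-second time window
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because both sides derive the code from the shared secret and the clock, no code ever travels in the clear ahead of time; but it also means the secret lives inside the authenticator app, which is exactly why the phone becomes a single point of failure, as discussed next.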
MFA has blocked many intruders. But it has also made our smartphones a single point of failure for our ability to access a massive system of interconnected computers and data servers. If something happens to my iPhone, where my MFA app lives, I cannot access many of my accounts. Because I cannot log in, I cannot contact the company's help desk. Given how central our smartphones have become to our daily lives, this is potentially a serious threat. Think about how many things you take for granted but would not be able to do if your smartphone broke or was stolen. You would be locked out of your accounts without access to your data. I find this very troubling. I experience flashes of anxiety when I cannot find my phone or when I'm in a crowded place where thieves could snatch it at any moment. The providers of security services have not designed a Plan B backup system to maintain my access if my phone breaks. The practice of leaving us stranded without access is turning into a dark innovation.
Internet service providers have had to become very inventive to pay for the infrastructure that supports their services. Software developers used to distribute software on disks so that you could install it on your computer. They would sell you an access key to unlock it after installation. But a black market for access keys soon emerged. Developers discovered they could avoid this problem by moving to a subscription model. The user pays for an annual subscription and sets up an account with the provider. The provider uses MFA to bind your app to your account so that you can use the app without giving a password every time. Your app regularly pings the server to verify that your subscription is still in force. This model has become so popular that it has acquired a generic name, "XaaS," for "X as a service." I now have to maintain a large database of account names, passwords, and PINs for many services. I worry about a system crash or ransomware making that file inaccessible. Worse, I worry that if a subscription expires without my knowledge (easy to forget with so many), I suddenly have no access to a needed service and to my personal data maintained by that service. My access to my data, and indeed my identity, depends on my maintaining all these subscriptions. If I become seriously ill and cannot maintain these subscriptions, they will expire within a short time of my death. My identity, records, personal files, and all other digital objects I own will be lost forever. Historians of the future will have no access to these files and records because they all vanished into bit-space after the deceased's subscriptions expired. The practice of maintaining our identities by subscription is another dark innovation.
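The binding described above can be sketched as a periodic license check: the app pings the provider with its account ID and refuses to open anything once the subscription has lapsed. The following is a toy illustration, with all class and method names hypothetical, that also shows the failure mode the essay worries about, where the data still exists on the server but a lapsed subscription makes it unreachable:

```python
import time

class LicenseServer:
    """Toy stand-in for the provider's subscription server (hypothetical API)."""
    def __init__(self):
        self._expiry = {}  # account_id -> Unix expiry timestamp

    def subscribe(self, account_id, days, now=None):
        now = time.time() if now is None else now
        self._expiry[account_id] = now + days * 86400

    def is_active(self, account_id, now=None):
        now = time.time() if now is None else now
        return self._expiry.get(account_id, 0.0) > now

class App:
    """Client app that pings the server before unlocking features and data."""
    def __init__(self, server, account_id):
        self.server = server
        self.account_id = account_id

    def open_document(self, name, now=None):
        # The regular "ping" the subscription model depends on.
        if not self.server.is_active(self.account_id, now):
            raise PermissionError("subscription lapsed: access to your data is blocked")
        return f"contents of {name}"
```

Note the asymmetry: nothing in the check deletes the document; expiry alone is enough to lock the owner out of it.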
Since World War II we have looked for ways to automate business processes with computers and storage devices. We have become better and better at this over the years. Since 2010 many new AI-powered apps have come online with universal connectivity and access to massive amounts of training data. These apps have enabled automation of many business processes. The automated process is typically faster and much cheaper than when human agents were involved. For example, in the 1990s many businesses started to centralize their customer help desks away from local branches. They replaced local agents with call centers. Customers would contact the call center for help. Call center designers advertised that they could cut the costs of customer service by a factor of 10 or more. Companies outsourced customer service to call centers in other countries where labor costs were lower. The idea of personalized service to deal with problems disappeared. In their relentless pursuit of ever-cheaper call centers, the designers began introducing AI in the form of voice-recognizing robots. If your question is on a preprogrammed list, the robot can assist you. Otherwise, you are often put on a 10-minute hold by a robot that tells you that, due to abnormally high customer loads, it cannot get to you right away. Many companies have now dispensed completely with any notion of talking with a live agent, and only allow you to send an email or request a chat session. Much of this automation has been a boon for companies and a bust for customers who need service.
This problem is spreading to many parts of organizations, not just customer service. In government offices, many managers are praised for totally automating their office, reducing the staff to a single caretaker, and seeking to handle all interactions with users (customers) through robot interactions. These automated systems are usually not well designed. They take a long time to respond and make many mistakes. I know of one case where an office was automated and its customers were told to make requests at least seven days in advance, or, if they needed a faster response, to apply for one by submitting a justification form. These systems do not provide adequate backup human agents to deal with the breakdowns they cause. The practice of trapping people in limited spaces ruled by unresponsive automated bureaucracies is a dark innovation.
Once I quipped that artificial stupidity in automated government systems is a bigger threat to humanity than superintelligent AI systems. Now I'm worried that it wasn't a quip.
Large-scale automated systems are being programmed to deal with classes of users rather than individuals. This is happening because data collection is so easy through our large, well-connected networks, and aggregation is easy with analytic software. It is far easier to program a large-scale automated system to deal with a relatively small number of classes of users than with large numbers of individuals. Our individuality is being abstracted away into digital classes, and machines are programmed to disregard it. This feature of our technology has supported class identity politics, which has become so controversial. Many see it as a dark innovation.
These four dark innovations were not intended by the designers of systems but are nonetheless real. They are new practices that have emerged around the edges of other practices that we want. I find them worrisome, but I have no solutions. There are trolls under every digital bridge.
Peter J. Denning is Distinguished Professor of computer science at the Naval Postgraduate School in Monterey, California. He is a past president of ACM (1980–82). He received the IEEE Computer Pioneer Award in 2021. His most recent book is Computational Thinking (with Matti Tedre, MIT Press, 2019).
2023 Copyright held by the Owner/Author.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2023 ACM, Inc.