Volume 2012, Number January (2012), Pages 1-9
2011 marked the 50th anniversary of the first educational program on computer law, sponsored by the Joint Committee on Continuing Professional Education of the American Law Institute and the American Bar Association (ALI-ABA). In 1971, at an ACM conference, Roy Freed and six colleagues founded the Computer Law Association (CLA), an international bar association (later renamed the International Technology Law Association).
Freed graduated from Yale Law School in 1940 and has practiced law for 46 years, first with the Department of Justice Antitrust Division, then with various private law firms. His article, "A Lawyer's Guide through the Computer Maze," published in the November 1960 issue of Practical Lawyer, was the first to alert the profession to the special issues created by computerization. Two years later, in "Legal Implications of Computer Use," published in the December issue of the Communications of the ACM, Freed called for interdisciplinary cooperation between the fields of engineering and law and argued that "Despite the fact that such cooperation might stem initially largely from the existence of problems, it offers opportunities for constructive contributions in both areas and for intellectual satisfactions from exposure to the workings and subject content of the two quite different professional fields."
Having been involved for so many years in both fields, Freed has recently applied his knowledge of law and computers to the study of the human mind. Last May, he talked at a joint meeting of the Boston Chapter of the IEEE Computer Society and the Greater Boston Chapter of the ACM about his life-long interest in the intellectual aspects of the law, specifically with regard to computers substituting for human activities and augmenting human intelligence. The following interview is based on that lecture and subsequent discussions Freed and I had in person and over email.
Ubiquity: What was your original idea of "computer law"?
Roy Freed: In October 1960, I attended the first (and last) national conference on "law and electronics," sponsored by UCLA. The participants were all talking about how computers could help in law practice, but I started to think about the substantive legal issues that were raised by the availability and use of computers. Computer law, as I saw it, would cover the whole gamut of legal issues, in all situations where computers were involved. Because computers were used as tools by people and substituted for human activity, there were implications for intellectual property, tax issues, banking law, health law, tort law involving accidents and intentional harm, and more.
But at the time, most contributors to the field of law and computers were interested in applying computers to existing processes in the practice of law, specifically to the time-consuming task of searching through documents. I was interested in the intellectual aspects of the law, while their interest was purely mechanistic: applying computers to assist lawyers in their work.
The initial legal problem was introducing computer records into evidence in courts. Lawyers and judges had no idea of the nature of these records and how they were created and replicated. In many situations, lawyers were withdrawing cases because they did not know how to introduce computer records into evidence and demonstrate that they had not been tampered with.
Ubiquity: And your interest in the intellectual aspects of the law led eventually to an interest in the mind?
RF: Fundamentally, laws should reflect facts. They should reflect the facts about what the mind, or living brain, is and how it operates. Lawyers apply laws routinely to the diverse results of the mind's operation, because legal issues arise from the mind's thinking processes, emotional impacts, and the way it controls our bodies. Now that the lines between what the mind does and what a computer does are blurring, lawyers need to understand the workings and limitations of computers and their inferential revelation of how the mind works. I simply seek to advance the discussion to a more fruitful factual arena.
Hence, we should update existing laws to reflect a more modern definition of the mind and a related, more modern definition of information.
Your question stimulates me to reflect on the long hiatus between my understandable concentration on traditional legal issues as a working student at Yale Law School and my much later opportunity to focus on the fundamental role of the mind in the legal system. I was enthralled early on by Jerome Frank's provocative jurisprudential observations in his pioneering classic, Law and the Modern Mind, which enlisted me in the concept of legal realism at Yale Law School in the 1930s. Ironically, it took me six decades, into my more relaxed retirement, to learn, and to explain, its source: the then-overlooked inherent subjectivity of each individual. Ground-breaker Frank is credited with introducing the now little-discussed, but still very relevant, perception "that the decisions made by judge and jury are determined to an enormous extent by powerful, concealed, and idiosyncratic psychological prejudices that these decision-makers bring to the courtroom."
Ubiquity: There are many definitions of "mind" floating around. Computer scientists distinguish between brain and mind; the brain is the biological organ, the machinery if you will, and the mind is a set of higher-level phenomena such as intention, creativity, free will, and awareness that somehow emerge from brain processes. How does this compare with the way "mind" is used in the law? How would you like to see "mind" defined?
RF: In this discussion, I'd like to use the term "living brain" interchangeably with "mind," meaning a dynamic animate machine broadly analogous to a computer. I prefer "mind" because it connotes an always-vital organ, in contrast to the brain, which can be either living or dead. And I distinguish the mind from a computer by its unique possession of creativity, free will, imagination, personality, emotions, and the like. Computers, in contrast, are merely plodding, logical, predictable machines controlled by minds, either directly or through programs.
I particularly object to the idea that the mind is merely the program of the brain. Conversely, I find support for preferring "mind" to "brain" in the fact that common expressions imply its inclusiveness and vitality. For example, it is appropriate to speak of a person acting mindlessly but not brainlessly, being out of one's mind but not out of one's brain, and keeping something in mind but not in one's brain.
Like you, I believe that software is machine-like. But software's machine-like qualities play out very differently in minds and in computers, now that it is apparent that remembered mental instructions are the human equivalents of programs. Being able to remember, and carry out, a sequence of instructions does not make the mind into software; it simply means that the mind can behave like software when it wishes. That is the basic point of my explanation. Instructions within the mind effectuate the mind's equivocal emotions and subjectivity, while those in computers make them logical, predictable, obedient slaves.
Marvin Minsky once characterized the brain as "a machine made of meat." His analogy is incomplete because the meat must be alive to function as a brain. For me "mind" and "living brain" are identical, while "brain" alone is equivocal and subject to its context for its meaning.
Ubiquity: What's wrong with how lawyers think about information?
RF: We don't in the law have a good understanding of what information is. Our implicit assumption is that information is nebulous, certainly not physical. For example, under the law, there has to be a tangible impact in order to claim emotional damage. Therefore, a person claiming emotional harm from the words of another person cannot collect damages without showing a physical impact on the body. Since words seem to be pure information without physical impact, this is hard to establish. But I would argue that even though information is not tangible, it alters the physical structure of the mind. Just because we cannot see it doesn't mean it's not physical.
Computer scientists recognize that information is physical. It is patterns of physical energy pulse signals that can exist within a mind or a computer. The more precise definition of information in computer science makes it clear that information affects the physical structure of the mind and has an impact on the health and behavior of its person.
Ubiquity: Is this a new interpretation, that mind patterns are physical and should be so recognized? How does this differ from present interpretations?
RF: To the best of my knowledge, this is a new and original interpretation for the law. My efforts should enable enterprising people in various professions to start assessing the legal implications of recognizing the mind, or living brain, to be a unique animate machine whose creativity and free will influence its operation. I cannot predict how my functional perception of information within the mind should be presented in statements of laws, because I haven't undertaken the massive exploratory effort to draft laws to integrate it. I hope that I can stimulate others to tackle that work.
As I noted earlier, the notion that all information is physical is clearly recognized in computer science. While I derived that understanding as a non-scientist by reasoning from the fact that it clearly is in computers and is reasonably inferable from them to the mind, Claude Shannon confirmed my conclusion with his statement, "Information is signals that are not noise." But non-scientists, especially lawyers, don't recognize that yet. They tend to assume implicitly that information is something that is nebulous or immaterial, if they think about it at all. I believe that that is a major fact for legal people to learn. I believe that accepting it can have major implications to the application of many laws.
Ubiquity: You argue that both the mind and the computer process information in the form of pulse signals. How does this shape your understanding of the mind?
RF: When I was lecturing to lawyers to introduce them to computer law, I explained how computers work by drawing a rough analogy from the human body: the mind as the counterpart of the computer's central processing unit, with its arithmetic unit for performing logical operations and its control unit for prescribing their sequence, and the senses and extremities of the body as counterparts to input-output devices. Now I reverse this analogy, using computers as rough models of the mind to infer the existence, and nature, of its equivalent features. I came to the conclusion that the human mind, created by evolution, is a unique bio-psycho-social machine that processes coded discrete batches of electrochemical signals and serves as the control center of the body. The mind follows internal instructions similar to computer programs: our instincts, or drives, are analogous to an operating system, and our mental "application programs" are constantly updated by creativity and by inputs from the environment. It ironically took the inventors of computers, who did not yet understand their own minds, to enable people, inadvertently, to understand the mind better by drawing upon computers as models. But while the human mind is roughly analogous to a computer, it is far superior. Creativity and free will distinguish the unpredictable mind from fully logical, predictable, and replicable computers.
Ubiquity: What does it matter whether the mind is a machine or not, in order to resolve legal questions?
RF: I certainly believe that the mind, or living brain, is a machine. It is a unique evolution-created animate machine broadly analogous to a computer, but far superior due to its features of creativity, free will, and emotions. It is difficult to imagine, off hand, legal issues that might depend expressly upon whether or not the mind is a machine. But it is useful to know that it is a machine in order to understand that legal issues arise from its failure to operate properly.
Ubiquity: In the law, evidence is assumed to be observable; for example, it is often recorded in a tangible medium. Even though a mind pattern is physical, it is not directly observable. Does that create problems for that kind of evidence? Are there new technologies (like brain scanners) that will allow us to "see" mind patterns?
RF: The fact that mind patterns are not readily and reliably observable undermines much of the prevailing traditional effort to derive reliable direct evidence of human action, observation, and memory. That entails serious uncertainty in applying the law, especially in criminal matters, because the mind is the prime mover in the vast majority of legal situations. So far, brain imaging scanners appear to be feasible primarily in determining in which components of the brain specific activity occurs. However, the present lack of observable mental patterns signifying specific thoughts or intentions leaves imaging of little use to derive reliable evidence. I wonder if those scanners will ever reveal precisely how specific mental activity occurs.
Ubiquity: Well, if information is not observable in the mind, or living brain, and the technology is not close to making it observable, how does this interpretation help us understand the law or make better laws?
RF: Let's tackle the possible inconsistency. Information is physical and is stored in computers, where it is directly observable, and in the mind, where it is not yet directly observable and might, or might not, ever be. That is simply reality that must be accepted and accommodated. Neuroscientists and others are trying to capture normal electromagnetic emanations from the mind, or living brain, as a natural conductor, but without success so far. The observable records in computers are ordinarily admissible in evidence when properly qualified for their apparent accuracy or reliability; the unobservable records in minds are not, except to the extent that a brain image can be interpreted persuasively by an expert witness. Hence, mind, or living brain, records are not yet directly eligible as evidence, if they ever will be.
As an unavoidable compromise, evidence presumably about mind, or living brain, records is available in traditional ways. For example, witnesses write documents that reflect their mind activity and are deemed to be admissions against their interest, or testify orally about their recollections or decisions and are tested by cross-examination, or their behavior is characterized by so-called expert witnesses, such as psychologists and the like.
Ubiquity: Many people, especially futurists, talk about the coming "singularity" and predict that computers will eventually be indistinguishable from the human mind or even superior to it.
RF: The elusiveness of the precise operation of the mind and its incredible complexity will, in my view, prevent people from replicating it. Neuroscientists have greatly influenced my thinking about the mind with their discoveries of the neuronal circuitry through which electrochemical signals flow. But neuroscientists know the "where" (they can identify the components of the mind) but not the "how." How the mind works precisely is going to remain a secret. We should be skeptical of over-optimistic forecasts that computer engineers will be able to replicate the mind. Mere knowledge that those signals exist within the mind is far short of knowing their precise composition, their coding for words and numbers, and their operation in thinking processes. Moreover, I would fear an inanimate machine with the power of the human mind, perhaps without its ambiguities, but inevitably lacking its undoubtedly unique human values and judgment.
Ubiquity: How do you think a more updated understanding of the mind and the nature of information could influence the practice of the law?
RF: There are too many examples to detail now. But take, as a superficial example, the term "work of authorship." Section 102(a) of the Copyright Act enumerates eight categories, including literary works, musical works, dramatic works, pantomimes and choreographic works, and pictorial, graphic, and sculptural works. Given my understanding of the nature of information, the term "work of authorship" can instead be generalized to include any specially created set of input signals to a mind, or living brain, or a computer. Music is no longer a set of notes on a sheet; it is a set of signals. Art need not be oil on canvas or watercolor on paper; it can be a graphic image generated by a set of signals.
Of course, this general functional approach would require a public effectively educated about both the operation of the mind as a counterpart to computers and the fact that all information is physical, taking the form of various types of coded batches of signals and of light and sound waves. This should replace the common implicit assumption that information is nebulous and immaterial. Understanding the physical nature of information signals might also raise some novel corporate jurisdictional issues. For example, because those signals are physical and expressly identifiable in the environment, might a corporation that transmits them as advertising, communication, or the like into a jurisdiction in which it is not qualified to do business expose itself to legal and other requirements, just as if it had owned tangible property there? An affirmative answer to this question could support the current effort of various states to start collecting sales taxes on interstate deliveries of goods bought online.
Ubiquity: Some politicians think that the law is a program governing the behavior of our social system and making laws is like programming. Stating laws as algorithms (instead of clear principles) is valid law making according to this view (e.g., when Congress reaches a compromise about the tax code that cannot be explained with a clear set of principles). Do you agree?
RF: Programs are simply instructions for performing a process or operating a system. The failure of lawyers to grasp that fact deterred the U.S. Supreme Court initially from acknowledging that computer programs might be patentable as processes under appropriate circumstances. Laws have many algorithmic properties. As such, they should be drafted, or written, as precisely as possible, but in a manner that people ordinarily can understand and comply with. Moreover, laws are written based on past experience and their future effects cannot be predicted with certainty. Hence, deficiencies in even the best of them must be accepted and dealt with ad hoc, like all language expressions.
Ubiquity: The way you understand "algorithm" for the law seems to clash with the way computer scientists understand "algorithm." Some experts think there is no precise definition of algorithm for a social system. In computer science an algorithm computes a definite function, with simple unambiguous instructions that require no interpretation. In a social system there is no definite function, no precise unambiguous notion of an instruction, and no precise unambiguous notion of control or execution. Should we understand algorithm for the law as a possibly misleading metaphor? Would it not be better for the law to state clear principles for people to follow, no matter what "algorithm" they are trying to apply to follow the law?
RF: I believe that algorithms, as statements of processes or series of steps, can exist for different purposes, actors, and machines. Hence, they can be either descriptions of processes for computers or prescriptions of behavior for people, recognizing their very different operation. Algorithms literally control computers, because computers lack human creativity and rigidly obey instructions. People, in contrast, are unpredictable, being creative and having free will; they are able to interpret, or disregard, the algorithmic (instructional) statements of many laws, such as the Internal Revenue Code, and risk the consequences.
Most language statements are ambiguous and open to different interpretations, because each mind is inherently subjective, both susceptible to ideologies and protective of its individual mind set, point of view, or ego by resorting to the instinct, or drive, to survive. While algorithms might lack a desired predictability in social systems, because of the equivocal operation of the mind, they still seem to be a needed means of trying to achieve social order. Otherwise there would be chaos, like intersections without traffic lights. They are the best that can be achieved. By the way, during WWII, before I knew about computers, I drafted regulations at the Petroleum Administration for War to govern the petroleum industry, appending much-appreciated illustrative flow diagrams.
Ubiquity: The film "The Final Cut" speculates about the legal implications of brain implants that could record every sensory experience that a person had during life. Is the scenario there one we should be concerned about? How does it relate to your ideas about the mind and the law?
RF: I find it interesting that neuroscientists are presently exploring the feasibility of miniature electronic mental implants for diverse purposes, for example, to remedy Parkinson's disease symptoms. However, that general effort obviously has many diverse legal implications, and possibly unpredictable consequences, as human minds undertake to tinker directly with the mind. I think that those medical efforts should be guided rigorously by the traditional axiom "Do no harm!"
Gil Press has worked as a research and marketing executive at NORC, Digital Equipment Corporation, and EMC Corporation. He is Managing Partner of gPress, a social sciences and market research consultancy.
©2012 ACM $10.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2012 ACM, Inc.