bluesweet posted on 2004-9-1 00:00:22

On technological development and personal privacy. Hope it's useful.

Is invasion of personal privacy the inevitable result of technological advancement? Philip E. Agre, associate professor of information studies at the University of California, Los Angeles (UCLA), addresses this question in a December 1998 article from Encarta Yearbook. Agre argues that a combination of governmental and private measures is necessary to safeguard privacy as new technologies become part of everyday life.

Personal Privacy in the Digital Age

By Philip E. Agre


It is the year 2010, and your car is hooked up to the Internet. As you drive, you receive updates and instructions that reflect changing traffic conditions monitored by video cameras and satellites. Your mechanic is able to monitor your engine remotely and alert you if there are signs of a problem. Your entire home music collection is available on the car stereo.


But these conveniences come at a price. Your insurance company also tracks your movements, making sure you obey all speed limits. You receive endless personalized advertisements for the businesses that you drive past. The police have noticed that you often drive through a bad part of town and have started a file on you.


This scenario is entirely plausible, and the technology is already available or soon will be. But will it actually happen? Is invasion of privacy the unavoidable consequence of technological progress?


Hundreds of today's emerging technologies have privacy implications, and many of them, such as wireless data communications, have already become cheap enough to be used on a large scale. Once these technologies become commonplace, it will be nearly impossible to change them. For this reason, taking measures to protect privacy should be high on the agenda of societies throughout the world.


Why should we care about a possible loss of privacy? What are some of the potential impacts when our privacy is breached? What data trails does a person create in modern society? How important is the Internet, with its booming demand for online shopping and its free flow of information, to these concerns? What steps can individuals take to control access to data regarding their personal lives and thus protect their privacy?

Personal Information and Technology


In the modern world, privacy issues constantly arise from the collection and dissemination of digitized personal data. The computerized transfer of such data by a myriad of devices has become a routine part of our lives. We exchange this type of data when withdrawing money from an automated teller machine (ATM), borrowing a book from the library, or sending electronic mail (email) on the Internet. Computers also affect our lives in a thousand indirect ways: the bills we get in the mail, the logistical systems that get groceries into the store, telephone networks, and more.


To understand the privacy issues that information technology can raise, it is important to understand what computers are and how they are designed. Information technology originated in military and business environments as a way of automating existing practices, such as calculating missile trajectories and scheduling factory operations. As the technology matured, companies such as International Business Machines Corporation (IBM) shaped modern software engineering by drawing on the methods of industrial automation and the language of bureaucracy. The computerized files that are the focus of privacy concerns today are directly descended from the paper files of the past.


Computers are, above all, representational machines—they manipulate internal patterns of data that represent people, places, and conditions in the outside world. Some data represents the past—for example, when an accounting program keeps records of financial transactions. Other data represents the future, as when a computer simulates the economic impact of a proposed change in taxes. Yet other data represents the present, such as when a tracking device attached to a truck keeps the trucking company informed of its location.


Central to the design of any computer system, therefore, is a careful analysis of what sorts of things need to be represented. A designer may decide that a particular system needs to represent people, cars, employees' tasks, and so on. The next step is to decide which attributes of these things need to be represented: an employee's name and job title, the type and location of a car, the inputs and outputs of a task, and so on. Only then is it possible to specify the procedures the computer should follow.
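
To make that design sequence concrete, here is a minimal sketch in Python; the entities (an employee and a car) and their attributes are the illustrative ones named above, and the procedure at the end is hypothetical:

from dataclasses import dataclass

# Step 1: decide which things the system must represent.
# Step 2: decide which attributes of each thing matter.
@dataclass
class Employee:
    name: str
    job_title: str

@dataclass
class Car:
    car_type: str
    location: str

# Step 3: only once the representation is fixed can procedures be specified.
def assign_car(employee: Employee, car: Car) -> str:
    """Produce a dispatch record from the represented attributes."""
    return f"{employee.name} ({employee.job_title}) takes the {car.car_type} parked at {car.location}"

print(assign_car(Employee("A. Smith", "courier"), Car("delivery van", "Lot 3")))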


The best-designed computer system, however, is useless without a supply of accurate input data. So designers must also provide their machines with the technical means of “capturing” the data. Early computers used simple mechanisms such as keyboards: A person would manually type in the necessary data. Today, however, computers can capture data through an enormous variety of mechanisms. These mechanisms include bar code scanners, tracking devices, and wallet-sized cards with magnetic strips. Some systems also use microphones, cameras, and more exotic kinds of sensors.

Streams of Data


As a person goes through the day, therefore, representations of his or her activities are continually being captured by computer input devices. Restaurant orders are entered into point-of-sale (POS) terminals, which calculate the bill but also detect patterns in customer dining habits. Medical personnel create detailed records of interactions with patients, thereby assisting future caregivers but also permitting oversight by insurance companies. Email messages at work are filed for easy searching, but also for easy reading by supervisors. Credit card systems capture details of purchases, easing both payment and subsequent marketing.


Few of these databases are unknown to consumers, who can see a grocery store's scanner in operation and who most likely realize that a computer prints their electric bill. Even so, few people understand the consequences of all of this data being captured, accumulated, and passed along. It is a complicated matter. For example, data can only be abused if it is individually identifiable—that is, if the computer knows who you are. If you pay by cash in a restaurant, the records in the POS terminal have no way of connecting your identity to the food you ordered. But if you purchase groceries using a grocery store “club” card, you have identified yourself and made it possible for your data to be personally identified and thus possibly manipulated and abused.
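
The distinction can be made concrete with a small sketch, assuming a hypothetical record format; the only difference between the two purchases below is the presence of an identifier:

# Illustrative records; all field names and values are invented.
cash_purchase = {
    "store": "Elm Street Grocery",
    "items": ["bread", "coffee"],
    "total": 14.85,
    # no identifier: the terminal has no way to connect this food to a person
}

club_card_purchase = {
    "store": "Elm Street Grocery",
    "items": ["bread", "coffee"],
    "total": 14.85,
    "club_card_id": "4417-0021-9934",  # links the purchase to a named account
}

def is_individually_identifiable(record):
    # A record poses the dangers discussed above only if some field
    # maps back to a particular person.
    return "club_card_id" in record

print(is_individually_identifiable(cash_purchase))       # False
print(is_individually_identifiable(club_card_purchase))  # True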


Because of the dangers posed by individually identifiable information, most organizations take steps to prevent abuse. Computer security, for example, includes numerous measures to prevent data from being used in unintended ways, whether by outsiders “cracking” a password mechanism or by insiders who might be paid by private investigators to retrieve individual records. Responsible organizations also establish clear data-handling policies, such as stringent password requirements, and train their employees to follow them. But these measures are hardly foolproof. Few abuses of personal information leave obvious signs that would tip off victims. As a result, the dangers are hard to measure, and few organizations have adequate incentives to take the necessary precautions.
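
As one concrete illustration of the kind of safeguard mentioned above (a sketch only, not a description of any particular organization's practice), a password mechanism should store salted hashes rather than the passwords themselves, so that even an insider who copies the table cannot read them:

import hashlib
import hmac
import os

def hash_password(password):
    """Return (salt, key); only these are stored, never the plaintext password."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, key

def verify_password(password, salt, stored_key):
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(key, stored_key)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False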


Even when security is tight, the most significant dangers to privacy derive from uses of personal information that are consciously chosen. Flows of personal data that are initiated for one purpose are often used for other purposes later on. The most important of these secondary uses of personal information involve the merger of databases from different sources. Records of your supermarket purchases, for example, will be more valuable to marketers if they can be merged with demographic information about your background and lifestyle. By combining the data you generate at the supermarket with, for example, information gleaned from your credit card purchases—where you buy your clothes, rent your videos, and go out to eat—a well-defined profile of your personal tastes could be developed and used for future marketing.


In order to merge different databases, however, each database must use the same identifier (a number that has been assigned uniquely to you). In the United States, the identifier most often used is the social security number (SSN). A recent congressional initiative to require states to link social security numbers with driver's licenses, designed to help control illegal immigration, was put on hold in November 1998 after significant citizen outcry. Privacy advocates oppose the creation of a national identification card in the United States, which would allow databases to be merged on a large scale.
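
A small sketch shows why the shared identifier matters; the two record sets below are invented, and the join is possible only for records that carry the same number:

# Two databases built independently; all data is made up.
purchases = [
    {"ssn": "123-45-6789", "store": "SuperMart", "item": "diapers"},
    {"ssn": "987-65-4321", "store": "SuperMart", "item": "coffee"},
]
demographics = [
    {"ssn": "123-45-6789", "age": 34, "zip": "90095", "income_band": "middle"},
]

def merge_on_identifier(purchases, demographics):
    """Join the record sets on the shared identifier to build marketing profiles."""
    by_ssn = {d["ssn"]: d for d in demographics}
    return [
        {**p, **by_ssn[p["ssn"]]}
        for p in purchases
        if p["ssn"] in by_ssn  # records with no matching identifier cannot be merged
    ]

for profile in merge_on_identifier(purchases, demographics):
    print(profile)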


On the other hand, no major privacy problems arise from aggregate data—statistical trends that are calculated from thousands of individual records. There are many benefits to this type of information, from deducing the causes of illness to analyzing what types of products are most in demand. This type of personal data collection means that people are more likely to be alerted to a public health problem, and it is more likely that the book you want will be in stock and the sweater you like will come in your favorite colors.
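
The contrast with individually identifiable data can be sketched in a few lines; the records are invented, and the point is that the identifiers are dropped before any analysis:

from collections import Counter

records = [
    {"ssn": "123-45-6789", "product": "sweater", "color": "blue"},
    {"ssn": "987-65-4321", "product": "sweater", "color": "blue"},
    {"ssn": "555-11-2222", "product": "sweater", "color": "green"},
]

# Aggregate statistics: only the trend survives, not any individual.
color_demand = Counter(r["color"] for r in records)
print(color_demand.most_common())  # [('blue', 2), ('green', 1)]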

Privacy Violations and Crime


The large amount of personal data floating around in society today leaves individuals open to having their privacy violated, sometimes with dire consequences. Incredibly, a stolen social security or credit card number is often all that is needed to perpetrate identity theft, a type of fraud in which a criminal assumes the victim's identity to obtain illegal credit and run up huge debts.


Statistics are uncertain on this emerging area of crime, but one estimate by the U.S. Secret Service, which tracks major cases of identity theft, indicates that this type of crime was responsible for $745 million in losses in 1997, nearly $300 million more than the previous year. Credit companies say fraud inquiries have soared in the 1990s to about 500,000 cases annually. Credit laws typically limit direct financial losses to the victim, but correcting credit records and other corrupted information can consume a victim's life for years afterward and cost thousands of dollars.


Medical records are another highly sensitive type of information that is ripe for abuse. Often assumed to be highly confidential as part of the patient-doctor relationship, electronic medical data in the United States actually has little in the way of privacy regulation. The rise of large managed-care health organizations and the tight connections between drug companies, drugstores, and intermediary companies known as prescription benefit managers (PBMs) have changed the way patient medical information is used.


It used to be that someone filling a prescription at the local pharmacy could assume a certain measure of confidentiality. Today, the same consumers could find themselves receiving letters from the PBM telling them when and how to take their medication, enrolling them in a special program, or informing them that they have been switched to a lower-cost prescription. PCS Health Systems, a PBM owned by the giant drug maker Eli Lilly and Company, covers 56 million people and has a total of 1.5 billion individual prescriptions in its database. Although most people assume that this information is confidential, in fact the companies can use the information with few legal restraints.


In addition to intrusive marketing and general concerns about medical privacy, employees face particular risks if medical records are available to their employers. There are many accounts of employees who have been reassigned or fired when supervisors learned of a medical condition by accessing medical records. People with acquired immune deficiency syndrome (AIDS) can suffer particular harm if their medical status is disclosed, but even employees seeing a therapist for depression or another mental condition can face repercussions if their treatment becomes known. Although definitive data are hard to come by, a 1996 study by David Linowes, a professor of political economy at the University of Illinois, showed that one-third of Fortune 500 companies responding to a survey had utilized individual medical records in making job-related decisions.


A particularly pertinent example of how personal data can be seriously misused came in 1997 when a 36-year-old U.S. Navy sailor was threatened with expulsion from the military because he was linked with an America Online (AOL) personal profile that said he was homosexual. Timothy R. McVeigh (no relation to the convicted Oklahoma City bomber) had filled out the AOL profile indicating he was gay using only the name Tim. But naval investigators found the profile and obtained McVeigh's full name from AOL's customer service department, in apparent violation of AOL's own written privacy policies. McVeigh sued the Navy and won a settlement in June 1998 that allowed him to retire with full benefits and an undisclosed sum. AOL admitted it made a mistake and agreed to pay an undisclosed sum in damages. AOL also vowed to conduct employee training on privacy issues.


Inadequate protection of private information can even threaten personal safety. An actor named Rebecca Schaeffer was killed in 1989 by a deranged fan who obtained her address through a private investigator. To get the address, the investigator had simply called the California Department of Motor Vehicles. After this incident California passed laws restricting access to its motor vehicle records, but of course this is just one source of personal data. Women escaping from domestic violence are particularly vulnerable, and must go to great lengths to prevent their assailant from using public records or illicitly obtained private data to track their hiding places.

Internet Privacy


A new level of concern over abuse of information technology has accompanied the rise of the Internet. Although email is not inherently private, some guarantees of privacy can be obtained by encrypting the contents of electronic messages. Emerging technical standards could make such encryption routine in the future.
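
The article names no particular tool, but as one illustration of what encrypting message contents involves, the following sketch assumes the third-party Python cryptography package and uses its Fernet recipe with a shared key:

from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in practice the correspondents must first share a key securely
cipher = Fernet(key)

token = cipher.encrypt(b"Meet me at noon.")   # what actually travels across the network
print(token)                                  # unreadable to anyone without the key
print(cipher.decrypt(token).decode())         # "Meet me at noon."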


The World Wide Web, however, is a more complicated story. Web sites typically use simple data files called cookies to maintain detailed records of individuals' movements from one Web page to another. In practice, however, cookies resemble pseudonyms—false names adopted just for the purpose of browsing that one Web site. A Web site therefore cannot know a user's identity without explicitly asking for it. In this sense, the Web's current architecture is inherently friendly to privacy, although this situation could easily change as that architecture evolves.
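
A sketch of the pseudonym idea, using Python's standard library and a hypothetical cookie name: the value the site stores is random and says nothing about who the visitor is unless the visitor later volunteers an identity that the site records alongside it.

import secrets
from http.cookies import SimpleCookie

def issue_pseudonym():
    """Assign a new browser a random identifier that carries no personal data."""
    cookie = SimpleCookie()
    cookie["visitor_id"] = secrets.token_hex(16)
    cookie["visitor_id"]["path"] = "/"
    return cookie

print(issue_pseudonym().output())   # e.g. Set-Cookie: visitor_id=9f2c...; Path=/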


However, some Web sites require registration before a user can access the site, potentially leading to sales pitches and other appeals. Sites targeted at children have come under particular scrutiny for collecting detailed personal data from naïve users. The Web can seem like an innocuous, friendly place, but the site you furnish with your credit card number could be based in a country where U.S. fraud laws do not apply. An organization such as the nonprofit consortium TRUSTe can certify a site's privacy policy, but this approach is relatively new and still unproven.

Fear of “Big Brother”


Threats to privacy can also arise from abuse of personal information by the government. Historically, the most important threat to privacy has been political oppression. Computers emerged in the years during and immediately after World War II (1939-1945). Secret police organizations and their networks of listening devices and informers were common, most prominently in the totalitarian states of Nazi Germany and the Union of Soviet Socialist Republics (USSR), and to a lesser but still significant extent in the United States and other democracies. British writer George Orwell described a culture of constant surveillance in his novel 1984 (1949), and Orwell's all-seeing “Big Brother” has become a metaphor for privacy invasions of all kinds.


The “Big Brother” concept of a centralized, publicly visible surveillance system is misleading for modern purposes, however. Contemporary dangers to privacy are primarily decentralized—they emerge from a combination of databases representing many different aspects of life. These databases are usually created independently of one another, and they are often incompatible. When contemporary governments do engage in systematic surveillance, as in the case of the U.S. National Security Agency's Echelon system for intercepting electronic communications, they generally do so secretly rather than openly. Of course, the vast amount of personal data collected through modern technology only makes such covert surveillance easier.

Privacy Solutions


Potential solutions to the loss of informational privacy can be grouped into three areas: regulation, technical measures, and individual action. Protecting privacy is such a complicated and difficult task that any workable solution will have to address all three areas.


The leading model of privacy regulation worldwide is known in the United States as fair information practices and in most other countries as data protection. This model originated in the late 1960s as countries throughout the industrial world built centralized file systems to support their welfare states. The potential for abuse of these files was obvious to everyone, and policy makers in several countries—especially Germany, Sweden, and the United States—articulated a set of principles based on individual rights. These principles include the right to know what databases exist, the right to know what the collected information will be used for, and the right to have false information corrected.


In the United States, these principles were incorporated into the 1974 Privacy Act. However, that law applied only to the government and has loopholes that make it ineffective in practice. Since that time, the United States has passed a fragmentary set of industry-specific privacy laws, but industry concerns over limiting technological development and hampering commerce have prevented any more generalized legislation from being passed. For the most part, therefore, the government has allowed private companies to regulate themselves. In the area of medical records, the 1996 Kennedy-Kassebaum health care legislation, named for Democratic Senator Ted Kennedy of Massachusetts and then-Senator Nancy Kassebaum, a Republican from Kansas, mandates that the government must have medical privacy regulations in place by mid-1999. How strong those regulations will be is uncertain.


The Europeans have adhered to a stricter privacy standard, believing that informational privacy is a human right and recalling the abuses of personal data by the Nazis during World War II. Europe has applied its data protection principles both to government and to private industry. The European Union (EU) recently enshrined these principles in the Data Protection Directive, which all EU member countries must implement. The agreement, which took effect in October 1998, caused concern in the United States because it prohibits transfers of personal data to any nation that does not have adequate privacy protections. Negotiations on this issue between the United States and the EU were ongoing.


New technologies, the source of much privacy concern, can also be used to protect privacy. Because privacy problems arise when information is individually identifiable, cryptographic methods can be used to disguise identity. Digital cash systems, for example, can take the place of credit cards and operate as anonymously as ordinary cash, or they can be designed to reveal an identity only with the payer's permission or with a court order. Similar methods can be employed to support anonymous email or digital pseudonyms that prevent information from being merged by different organizations without the individual's permission.
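
One of these ideas, the digital pseudonym, can be sketched with a keyed hash (an illustrative construction, not a description of any deployed system): the same person presents a different, stable name to each organization, and the names cannot be linked without the person's secret key.

import hashlib
import hmac

def pseudonym(secret_key, organization):
    """Derive a per-organization pseudonym: stable within one organization, unlinkable across organizations."""
    return hmac.new(secret_key, organization.encode(), hashlib.sha256).hexdigest()[:16]

my_key = b"known only to the individual"
print(pseudonym(my_key, "supermarket"))   # one stable name at the supermarket
print(pseudonym(my_key, "video store"))   # a different, unrelated name at the video store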


Even without these protections, individuals can act to protect their own privacy. In some cases, market forces can create incentives for companies to protect privacy if consumers consistently call for such protection. Consumers should study a company's privacy policy in its promotional literature. If such a policy is weak or nonexistent, it is reasonable to assume that the company uses any personal information it captures in every way it legally can.


Consumers can also protect their privacy when faced with what they consider excessive requests for information by asking why the information is pertinent or by refusing to answer the questions. Finally, individuals can take initiative by informing other citizens about specific privacy problems. Many threats to privacy remain unpublicized simply because there are too many of them for existing privacy advocates to track. Even simple research on an unpublicized privacy problem can have an impact when submitted to a relevant Internet forum, watchdog group, or media outlet.

At the Crossroads


The spread of information technology has made the world a less private place. Computers that may be used to invade personal privacy can also be used to protect it. The Internet might have the potential to become an omnipresent network of surveillance, but it is already a worldwide forum for education, debate, and advocacy on privacy issues.


Nothing is set in stone at this point—everything depends on the choices that society makes over the next few years. Technologists can choose to incorporate privacy protection in future devices and systems. Consumers can choose to educate themselves, to assert their rights, and to become activists for sensible privacy protection. Policy makers can explore the combinations of measures that can protect privacy or can undermine it. If we so choose, we can enjoy the benefits of new information technologies while also preserving privacy.


About the author: Philip E. Agre is an associate professor of information studies at the University of California, Los Angeles (UCLA). He is the coeditor of Technology and Privacy: The New Landscape and the author of many articles on information technology.


Further reading:


Agre, Philip E., and Rotenberg, Marc, editors. Technology and Privacy: The New Landscape. MIT Press, 1997.


Alderman, Ellen, and Kennedy, Caroline. The Right to Privacy. Knopf, 1995.


Bennett, Colin J. Regulating Privacy: Data Protection and Public Policy in Europe and the United States. Cornell University Press, 1992.


Cavoukian, Ann, and Tapscott, Don. Who Knows: Safeguarding Your Privacy in a Networked World. Random House, 1995.


Givens, Beth, and Fetherling, Dale. The Privacy Rights Handbook: How to Take Control of Your Personal Information. Avon, 1997.


Regan, Priscilla M. Legislating Privacy: Technology, Social Values, and Public Policy. University of North Carolina Press, 1995.


Rothfeder, Jeffrey. Privacy for Sale: How Computerization Has Made Everyone's Private Life an Open Secret. Simon and Schuster, 1992.


Smith, H. Jeff. Managing Privacy: Information Technology and Corporate America. University of North Carolina Press, 1994.


Source: Encarta Yearbook, December 1998.
© 1993-2003 Microsoft Corporation. All rights reserved.