SOFTWARE (INTELLIGENT) AGENTS
Since the early 1990s, software agents—also known as intelligent agents, knowbots, softbots, or bots for short—have been the subject of a great deal of speculation and marketing hype. This hype has been fueled by “computer science fiction”—personified images of agents reminiscent of the robot HAL in Stanley Kubrick’s movie 2001: A Space Odyssey or the robot boy David in the movie Artificial Intelligence. As various chapters in the text demonstrate, software agents have come to play an increasingly important role in EC—providing assistance with Web searches, helping consumers comparison shop, and automatically notifying users of recent events (e.g., new job openings). This appendix is provided for those readers who want to learn a little more about the general features and operation of software agents in a networked world such as the Web.
WHY SOFTWARE AGENTS FOR EC, ESPECIALLY NOW?
For years pundits heralded the coming of the networked society or global village. They imagined an interconnected web of networks linking virtually every computer and database on the planet, a web that science fiction writer William Gibson dubbed the matrix. Few of these pundits envisioned the problems that such an interconnected network would bring. One exception was Alvin Toffler, who warned in his book Future Shock (1970) of an impending flood, not of water, but of information. He predicted that people would be so inundated with data that they would become nearly paralyzed and unable to choose between options. Whether or not that has occurred is an open question. There is no doubt, however, that today’s world of networked computers—intranets, Internet, and extranets—has opened the floodgates.
Consider some simple facts (Harris 2002):
◗ In 2001, it was estimated that over 10 billion (non-spam) e-mail messages were sent per day. The figure is expected to grow to 35 billion messages per day by 2005.
◗ Regardless of the metric used (e.g., growth in the number of networks, hosts, users, or traffic), the Web is still growing rapidly. In 2001, the public Internet contained 550 billion pages and was increasing at a rate of approximately 7 million pages per day.
◗ The amount of unique information being produced worldwide is doubling every year. In 2001, the world created an estimated 6 exabytes (10^18 bytes) of new information; in 2002, the figure was 12 exabytes. Taken together, that is more information than was accessible in the entire 300,000 years of human history.
Unfortunately, end users are often overwhelmed. They spend most of their time navigating and sorting through the available data, little time interpreting it, and even less time actually acting on what they find. The end result is that much of the data we gather goes unused. For example, according to the Gartner Group (Kyte 2002):
◗ The amount of data collected by large enterprises doubles every year.
◗ Knowledge workers can analyze only about 5 percent of that data.
◗ Most of knowledge workers’ effort (60 percent or more) is spent trying to discover important patterns in the data, a much smaller share (about 20 percent) is spent determining what those patterns mean, and very little time (10 percent or less) is spent actually doing something based on the patterns.
◗ Information overload reduces knowledge workers’ decision-making capabilities by 50 percent.
What is the solution to the problem of data overload? Paul Saffo, director of the Institute for the Future, asks how we can reduce “the flood of data to a meaningful trickle” (Saffo 1989).
DELEGATE, DO NOT NAVIGATE
As far back as 1984, Alan Kay, one of the inventors of window-based computing, recognized the problems associated with point-and-click navigation of very large data repositories and the potential of software agents to address information overload. More recently, Nicholas Negroponte, director of MIT’s...