In "10 ethical issues raised by IT capabilities," we examined ethical questions that all of us as technology professionals need to consider as we go about our duties. This time, we look at ethical issues more specific to management, and not necessarily just IT management. Once again, one of our themes is that advances in technology, like advances in any other field of endeavor, can generate societal changes that should prompt us to reexamine our behavior. Civilization is dynamic, so elements of ethical codes that were perfectly appropriate in previous generations may no longer apply. Although space limits us to 10 issues, the ones we examine here fall into five categories of particular interest to technologists: privacy, ownership, control, accuracy, and security. As in the previous article, there are more questions than answers.
Governments collect massive amounts of data on individuals and organizations and use it for a variety of purposes: national security, accurate tax collection, demographics, international geopolitical strategic analysis, and so on. Corporations do the same for commercial reasons: to increase business, control expenses, enhance profitability, gain market share, and more. Technological advances in both hardware and software have significantly changed the scope of what can be amassed and processed. Massive quantities of data, measured in petabytes and beyond, can be centrally stored and retrieved quickly and effortlessly. Seemingly disparate sources of data can be cross-referenced to glean new meanings when one set of data is viewed within the context of another.
In the 1930s and 1940s, the volumes of data available were minuscule by comparison, and the "processing" of that data was entirely manual. Had even a small portion of today's capabilities existed, the world as we now know it would probably be quite different.
Should organizations' ability to collect and process data on exponentially increasing scales be limited in any way? Does the fact that information can be architected for a particular purpose mean it should be, even if doing so potentially violates individual privacy rights? If data meant for one use is diverted to another purpose that is socially redeeming and would result in a greater good, or that could result in a financial gain, does that mitigate the ethical dilemma, no matter how innocent and pure the motivation?
This is an issue with both internal and external implications. All organizations collect personal data on employees, data that, if not properly safeguarded, can have significant negative consequences for individuals. Information such as compensation, background data, and personal identifiers, including Social Security numbers and account numbers, must be maintained and accessed only by authorized personnel. Systems that track this data can be secured, but at some point the data must leave those systems and be used. Operational policies and procedures can address the proper handling of that data, but if they're not followed or enforced, there's hardly any point in having them. Organizations also routinely share data with each other, merging databases that contain all kinds of identifiers.
What's the extent of the responsibility we should expect from the stewards of this data? Since there's no perfect solution, where's the tipping point beyond which efforts to ensure that data can be accessed only by authorized individuals can be considered reasonable and appropriate?
Many people are required to sign nondisclosure agreements (NDAs) and noncompete clauses in employment contracts, legal documents that restrict their ability to share information with future employers, even to the point of barring them from joining certain companies or continuing to work in a particular industry.
What about the rest of us, who have no such legal restrictions? In the course of our work for...