
1.1  Introduction

Paris, 1948: the United Nations General Assembly met to vote on the 30 Articles of the Universal Declaration of Human Rights (United Nations Draft Committee, 1948). Article 1 states that all human beings are born free and equal in rights. Article 12 states that no one should be subjected to arbitrary interference with his or her privacy. The role of Information Systems has changed in the past decade due to advances in technology, and with that change comes a range of unprecedented challenges to the privacy and security of individuals’ personal data. Information Systems are a network of components that collaborate to “collect, process, store, and disseminate information to support decision making, coordination, control, analysis and visualisation in an organisation” (Laudon & Laudon, 2011). Privacy is the “claim of individuals to be left alone, free from surveillance or interference from other individuals, organisations or the state” (Laudon & Laudon, 2011).

1.2  Role of Information Systems in
relation to privacy

Individuals have often expressed outrage at violations of their digital privacy over the past 20 years. For example, the National Security Agency (NSA) participates in data mining across the globe, collecting sensitive personal information (Gorman, 2008). Even today, governmental and commercial bodies continue to violate privacy laws around the world in order to extract and collect data for reasons such as profit maximisation and national security. In this, the NSA follows a pattern similar to that of corporations, which challenge data and privacy law on an almost daily basis.

 

There are five moral dimensions of the information age that concern privacy. Information rights and obligations describes the extent to which individuals are legally protected in the rights to their information. Property rights and obligations refers to the difficulty of tracing ownership in today’s digital society, and equally to how easy it is for corporations to ignore those property rights. The third dimension, accountability and control, is similar to property rights in that it asks who should be held responsible for any harm done to individuals and their rights. Legally, there should be rules and regulations governing the systems used, so system quality is the fourth moral dimension. Finally, the fifth dimension is quality of life, which relates more to the ethics of Information System processes and the reasons why organisations want to collect our data.

 

The five moral dimensions of Information Systems are linked such that if one is altered, the rest are also affected, in a ripple effect. In this model, the pond represents a society in equilibrium (Web 1.0). The rise of Web 2.0, and subsequently 3.0, caused ripples that destabilised that society. The cultures and values that previously shaped the perceptions and behaviours of the individuals in the pond changed with the development of the Web. The reaction is therefore prolonged, since it takes time for people to adjust to (and adopt) new values, especially those relating to privacy. Organisations recognise this and so make every effort to contain their privacy methodologies and policies, to limit the negative effect on their users and consumers.

 

Currently, there are four components of Information Systems: hardware, software, procedures, and people. Data has bridged the gap between the software and the procedures undertaken by users of Information Systems. Data is collected and analysed for many reasons, but unfortunately it can also be used for illegitimate or unethical purposes, or both. Big data is the name given to agnostic data sets so large that the analysis techniques and methods used 10 years ago are now inadequate, because we have become so interconnected that Information Systems have had to process new types of data, such as that taken from social network sites (SNSs). Since the take-off of Web 2.0, social network organisations have been able to quantify human behaviour and predict future behaviour (Mayer-Schönberger & Cukier, 2014), a process known as datafication. These social interactions are then repackaged and sold to advertisers. McAfee and Brynjolfsson (2012) suggest big data can be used to improve customer relationship management, and Zalmanson (2013) suggested that new business models are possible based on prediction using social data from social media platforms. Predicting future patterns of consumer habits and behaviours allows corporations to create demand, which makes big data extremely valuable and sought after. Datafication is regarded as a legitimate method of accessing, understanding, and monitoring human behaviour. Yet dataism rests on the important concept of trust between individuals and the organisations that collect and interpret their data, although issues arise through semiotic translations (Constantiou & Kallinikos, 2015). This quality of trust is important when data is used for dataveillance (Raley, 2013). These organisations also analyse and share this private information with third parties (Dijck, 2014).
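
As a rough illustration of what “quantifying and predicting behaviour” can mean in practice, the sketch below, a minimal hypothetical example not drawn from any of the cited works, counts a user’s daily interactions on a social platform and extrapolates a simple linear trend to estimate the next day’s engagement.

```python
# Minimal, hypothetical sketch of datafication: turning logged social
# interactions into a numeric behaviour signal and predicting the next value.
from collections import Counter
from datetime import date

# Assumed input: (user, interaction_type, day) events from an SNS log.
events = [
    ("alice", "like", date(2023, 5, 1)),
    ("alice", "comment", date(2023, 5, 1)),
    ("alice", "like", date(2023, 5, 2)),
    ("alice", "share", date(2023, 5, 3)),
    ("alice", "like", date(2023, 5, 3)),
    ("alice", "comment", date(2023, 5, 3)),
]

# Step 1 (datafication): count interactions per day for one user.
daily = Counter(day for user, _, day in events if user == "alice")
days = sorted(daily)
counts = [daily[d] for d in days]

# Step 2 (prediction): fit a least-squares line to the daily counts and
# extrapolate one day ahead. Real platforms use far richer models.
n = len(counts)
xs = list(range(n))
x_mean, y_mean = sum(xs) / n, sum(counts) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, counts)) / \
        sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean
predicted_tomorrow = slope * n + intercept

print(f"Observed daily counts: {counts}")
print(f"Predicted interactions tomorrow: {predicted_tomorrow:.1f}")
```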

 

Organisations that collect data from individuals must use data analysis in order for that data to be useful. One new method of data analysis, inspired by the networks and communication technologies in use today, is known as Nonobvious Relationship Awareness (NORA). Information Systems take many different forms, such as search engines, office automation, and enterprise systems, so organisations and companies struggle to combine information from these disparate origins. NORA continuously matches and merges this data to identify relationships, with applications in national security.
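
The sketch below is a toy illustration of the matching-and-merging idea, not NORA’s actual implementation: records from disparate sources are joined on shared identifiers, such as phone numbers or addresses, to surface non-obvious links between people. The records and identifiers are invented.

```python
# Toy sketch of NORA-style matching and merging: records from separate
# systems are linked on shared identifiers (phone, address) to reveal
# non-obvious relationships between individuals.
from collections import defaultdict

# Assumed inputs: simplified records from three disparate sources.
employee_db = [{"name": "J. Smith", "phone": "555-0101", "address": "1 High St"}]
customer_db = [{"name": "Jane Doe", "phone": "555-0101", "address": "9 Park Ln"}]
watch_list  = [{"name": "J. Doe",   "phone": "555-0199", "address": "9 Park Ln"}]

def find_links(*sources):
    """Group records from all sources by shared identifier values."""
    index = defaultdict(list)
    for source_name, records in sources:
        for record in records:
            for key in ("phone", "address"):
                index[(key, record[key])].append((source_name, record["name"]))
    # Keep only identifiers that appear in more than one record.
    return {ident: hits for ident, hits in index.items() if len(hits) > 1}

links = find_links(("employees", employee_db),
                   ("customers", customer_db),
                   ("watch_list", watch_list))
for (key, value), hits in links.items():
    print(f"Shared {key} {value!r} links: {hits}")
# e.g. the shared phone links J. Smith (employee) to Jane Doe (customer),
# and the shared address links Jane Doe to J. Doe on the watch list.
```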

 

Transparency is an aspect of privacy, with regard to Information Systems, that varies between organisations. There are a number of misconceptions about the degree of digital visibility in the networked digital age; the first is that visibility is passive, requires little effort, and is impartial. Governmental and commercial organisations alike use personal data gathered from almost every individual to categorise, profile, and group individuals, using NORA for example. Marketing and advertising of products and services can therefore be personalised for each individual, and feedback on the success or failure of these adverts can then be measured too. Governments require data from individuals for national security and collect demographic data (e.g. national censuses) for future policies and amendments.

 

As well as the obvious informational dimension, privacy also has a spatial dimension, which refers to the experienced visibility of individuals. For example, people in public expect to be noticed by others in the same space. This extends to information technologies, which is not currently recognised as an issue by privacy law, though Cohen argues that it should be (Cohen, 2008). Cohen goes on to say that surveillance is permissible in public spaces (Branscomb, 1994, identifies the Internet as a public space) and in spaces owned by third parties (e.g. Facebook), but not within user-generated spaces that do not monitor the activity of others (e.g. blogs).

 

Foucault’s theory of panopticism (Foucault, 1979), developed in the context of prisons, can be applied to privacy, as it illustrates the “constant gaze” of organisations on the prisoners (people). The basic principle of panopticism is that a central guard tower constantly watches the prisoners. Foucault describes a disciplinary mechanism as a confined space containing all individuals (here, the Internet and the Internet of Things) who are under constant supervision and whose activity is all recorded. A lone hierarchical organisation or individual exerts power and influence, distributed without division, across the network of these connected individuals. In this way, the ubiquitous connection of Information Systems has influenced privacy.

 

The US Federal Trade Commission implemented the Fair Information Practice Principles (FIPPs) in 1973. The FIPPs are a set of guidelines that websites must adopt to protect America’s consumers. The two core principles are awareness and consent: users should be able to decide how their information is being used, and, equally importantly, websites should make it clear if and how they collect data from those users. Participation, security, and enforcement are further guidelines but are not core principles. It could be argued that enforcement is as important as awareness and consent, since without enforcement all of the other principles are rendered obsolete. Online copyright enforcement is becoming increasingly associated with online privacy, according to Cohen (2003): in order to prevent the spread of unauthorised copies, copyright holders are integrating a number of potentially privacy-infringing digital rights management (DRM) technologies.
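
Returning to the awareness-and-consent principles above, a minimal, hypothetical sketch (not the FTC’s specification) of how a site might gate data collection behind an explicit, recorded opt-in is shown below.

```python
# Hypothetical sketch of an awareness-and-consent gate: personal data is
# only collected if the user has explicitly opted in to that purpose.
from datetime import datetime, timezone

consent_records = {}  # user_id -> {purpose: timestamp of opt-in}

def record_consent(user_id: str, purpose: str) -> None:
    """Store an explicit opt-in, timestamped for audit (enforcement)."""
    consent_records.setdefault(user_id, {})[purpose] = datetime.now(timezone.utc)

def collect(user_id: str, purpose: str, data: dict) -> bool:
    """Collect data only when consent exists for this exact purpose."""
    if purpose not in consent_records.get(user_id, {}):
        print(f"Refused: {user_id} has not consented to '{purpose}'.")
        return False
    print(f"Collected {list(data)} from {user_id} for '{purpose}'.")
    return True

record_consent("user42", "marketing")
collect("user42", "marketing", {"email": "user@example.com"})   # allowed
collect("user42", "analytics", {"clicks": 17})                  # refused
```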

 

Zhang et al. (2005) suggested that incorporating human-computer interaction (HCI) within the systems development life cycle (SDLC) would bridge the gap between man and machine, thus enriching the user experience. The progression of the Internet and communication technologies has allowed more users to interact with devices, but Zhang argues that humans should play a larger role throughout the SDLC of Information Systems than simply designing them. Perhaps this new human-centred Information System could spell disaster for privacy advocates, since acquired personal information would be interpreted by an individual rather than by an algorithm, as it is in Information Systems today.

 

There are some technical and legal responses to these privacy issues. For example, web beacons monitor users’ activity, such as cursor movements on each webpage, in real time. Cookies are files containing stateful information about the user, which is then sent back to the web developer; this stateful information might include the list of items in an online shopping cart or log-in credentials. There are also tools to prevent personal data from being taken. The Platform for Privacy Preferences (P3P) allows e-commerce websites to communicate their privacy policies to their visitors, and can be used to compare those policies against standards such as the FIPPs. Privacy campaigners promote the use of ‘opt-in’ policies on all websites and Do Not Track lists. Such countermeasures are known as privacy-enhancing technologies.
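
To make the cookie mechanism concrete, the sketch below uses only the Python standard library and is not tied to any particular site: a functional cookie stores the shopping-cart state, while a tracking cookie is only set when the browser has not sent a Do Not Track request. The cart format and cookie names are invented for illustration.

```python
# Minimal sketch of stateful cookies: a functional cookie keeps the
# shopping-cart state, while a tracking cookie is only set when the
# browser has not sent a "Do Not Track" (DNT: 1) header.
from http.cookies import SimpleCookie
from uuid import uuid4

def build_response_headers(cart_items, request_headers):
    """Return Set-Cookie headers for the cart and (optionally) tracking."""
    cookie = SimpleCookie()
    cookie["cart"] = ",".join(cart_items)      # e.g. "sku123,sku456"
    cookie["cart"]["path"] = "/"
    cookie["cart"]["max-age"] = 3600           # keep the cart for one hour
    if request_headers.get("DNT") != "1":      # honour Do Not Track
        cookie["visitor_id"] = uuid4().hex     # cross-visit tracking id
        cookie["visitor_id"]["max-age"] = 365 * 24 * 3600
    return [("Set-Cookie", morsel.OutputString()) for morsel in cookie.values()]

def read_cart(request_headers):
    """Recover the cart from the Cookie header of a later request."""
    cookie = SimpleCookie(request_headers.get("Cookie", ""))
    return cookie["cart"].value.split(",") if "cart" in cookie else []

print(build_response_headers(["sku123", "sku456"], {"DNT": "1"}))  # cart only
print(read_cart({"Cookie": 'cart="sku123,sku456"'}))  # ['sku123', 'sku456']
```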

 

The Internet of Things (IoT) is the concept of connecting devices to one another, with embedded intelligence and communication technologies, so that sensing, data analysis, and value creation can take place. This data is then used to generate human value in the form of applications, and benefits for both stakeholders and users. Smart cities were inspired by Web 2.0, the transition of webpages to user-generated content and the increase in the use of social media. Smart cities are linked to systems theory, which describes the connected, networked infrastructure of cities used to produce performance statistics and allow for more efficient decision making. Information Systems have developed to accommodate this need and so, in turn, more efficient and cost-effective methods of storing and analysing data have been developed.

Smart cities are cities that are integrated with Information and Communication Technologies to aid the development of the city. In 1987, Dutton et al. labelled these “wired cities”. Now, Komninos (2006) states that collaborative knowledge networks are used in “intelligent cities”. Komninos goes on to describe the three tiers of intelligent cities; Information Systems fall under the tier that creates a “virtual innovation environment”. These virtual innovation environments use “collective strategic intelligence” to gather and assess data. Komninos differentiates collective strategic intelligence (CSI) from business intelligence (BI), the primary source of data for organisations in the Web 1.0 era. Web 1.0 is the name given to the Internet in its infancy, when almost all web users were consumers and there was very little user-generated content. Web 1.0 developed into Web 2.0, which was centred around social networks, with more content being created by both corporations and users alike, on platforms such as Myspace and Facebook. Unlike BI, CSI draws on a group of organisations that share information from internal sources, rather than on enterprise resource planning and customer relationship management systems. This bottom-up approach to information collection and analysis catalysed the latest developments in Information Systems theory that merge governments and technologies (Deakin, et al., 2011).
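
As a concrete, entirely hypothetical sketch of the smart-city idea, the snippet below aggregates readings from connected sensors into simple performance statistics that a city dashboard could use for decision making; the district names, sensor types, and alert threshold are invented for illustration.

```python
# Hypothetical sketch of smart-city data aggregation: readings from
# networked sensors are reduced to performance statistics for decision making.
from statistics import mean

# Assumed input: (district, sensor_type, reading) tuples from IoT devices.
readings = [
    ("north", "traffic_speed_kmh", 32), ("north", "traffic_speed_kmh", 28),
    ("north", "air_quality_pm25", 41),  ("south", "traffic_speed_kmh", 51),
    ("south", "air_quality_pm25", 12),  ("south", "air_quality_pm25", 15),
]

def district_statistics(readings):
    """Average each sensor type per district."""
    grouped = {}
    for district, sensor, value in readings:
        grouped.setdefault((district, sensor), []).append(value)
    return {key: mean(values) for key, values in grouped.items()}

stats = district_statistics(readings)
for (district, sensor), avg in sorted(stats.items()):
    print(f"{district:>5} {sensor:<18} avg={avg:.1f}")

# A simple decision rule a city operator might apply (threshold invented):
for (district, sensor), avg in stats.items():
    if sensor == "air_quality_pm25" and avg > 35:
        print(f"Alert: poor air quality in {district} district (PM2.5 {avg:.1f})")
```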

 

This is the privacy paradox: as connectivity increases, privacy decreases.

1.3  Conclusions

The past 10 years have seen dramatic change across all factors of Information Systems, from the hardware to the people that use them and the processes they adopt to develop those Systems. Thanks to the development of Information Systems, organisations are now able to access, collect, analyse, and interpret metadata from multiple sources. As we all become more connected, organisations retrieve more information, seemingly without the knowledge of those individuals, which is then used for marketing, research and development, and national security purposes. Organisations and commercial corporations frequently challenge data and privacy laws, with Information Systems processing new data that bridges the gap between people and the software of those Systems. SNSs have connected us but at the same time taken our privacy away, by using personal information and activity to quantify and predict social behaviour with sophisticated new methods such as NORA. This form of dataism is based primarily on trust, forcing web developers to become more transparent with their privacy policies, especially with regard to the spatial dimension of privacy.
