We Are More Than Data: LJ2
Writing Prompt
- Respond to the We Are Data reading.
- What did you think of it?
- Some things you might want to address:
  - how does knowing this affect your personal use of social media and/or the internet,
  - how might knowing this affect your use of social media in a professional capacity,
  - do you have ideas about the ethics of this,
  - other things I haven't listed.
After reading "We Are Data," I realize that being aware of how social media algorithms assign me an identity, without my knowledge or any ability to intentionally modify it, matters both for me personally and for my prospects as a potential employee of businesses that use social media to evaluate me. My online data is sorted into categories of meaning through various types of algorithmic processing, without my active participation, awareness, or, frequently, consent.
As software developer Maciej Ceglowski explains, "The proximate reasons for the culture of total surveillance are clear. Storage is cheap enough that we can keep everything. Computers are fast enough to examine this information, both in real time and retrospectively. Our daily activities are mediated with software that can easily be configured to record and report everything it sees upstream" (Ceglowski, 2015).
And these identifications have an impact on my life whether I am aware of them or not. As a consequence, my search results will be filtered according to the categories, such as age, that the algorithms have assigned to me.
I think that tech firms, private businesses, and governments scrutinize the many aspects of who I am online and what that identification means. Each of these categorical identities is unconcerned with who I am as a person, focusing instead on the search patterns of my own past rather than my sense of self. Knowing this has a major impact on how I use social media and the internet. All of my ex-girlfriends loved spying on me on Facebook, so I deleted my account. I decided to use only LinkedIn and to keep only a handful of connections, people I know to be good individuals.
In terms of ethics, I believe that these machines are, and have long been, "wrong" much of the time, more often than the computing industry seems willing to admit. Even the most expensive software can fail, and despite their proclaimed infallibility, identification systems routinely perform badly.
Our algorithmic personalities arise as a result of a
continual interaction between our data and the algorithms that analyze it.
I believe that software is unpredictable and that blunders can happen. This has the potential to result in individuals being imprisoned unfairly, which is a serious problem. Facial recognition software can make two sorts of mistakes: false negatives and false positives. A false negative occurs when the algorithm fails to match a person's face to one in its database. A false positive occurs when the system does match the image, but the match is incorrect. Both of these failure modes can create serious problems, both for the organizations that use the software and for the general population.
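The two error types above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not a real face-recognition system: the threshold and similarity scores are invented, and a real matcher would compute similarity from image embeddings rather than take it as a number.

```python
# Hypothetical sketch: a matcher declares a "match" when a similarity
# score crosses a threshold. The threshold value is an assumption.
THRESHOLD = 0.8

def classify_match(similarity: float) -> bool:
    """Return True if the system declares a match."""
    return similarity >= THRESHOLD

def error_type(similarity: float, same_person: bool) -> str:
    """Label one comparison: correct, false positive, or false negative."""
    matched = classify_match(similarity)
    if matched and not same_person:
        return "false positive"   # system matched, but the people differ
    if not matched and same_person:
        return "false negative"   # same person, but the system missed it
    return "correct"

# A genuine pair scored below the threshold is missed (false negative),
# while an impostor pair scored above it is wrongly matched (false positive).
print(error_type(0.75, same_person=True))    # false negative
print(error_type(0.85, same_person=False))   # false positive
print(error_type(0.90, same_person=True))    # correct
```

The sketch also shows why the threshold matters: raising it trades false positives for false negatives, and vice versa, so neither error can simply be tuned away.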
Reference:
Ceglowski, M. (2015, September 14). What happens next will amaze you [Conference presentation]. FREMTIDENS Internet Conference, Copenhagen, Denmark. Idle Words. Retrieved January 31, 2022, from http://idlewords.com/