
Annotated Bibliography

This annotated bibliography represents a summary of the various topics that I have researched and engaged with this term.

Banister, Cyan, and Alex Hertel. “We Love Augmented Reality, but Let’s Fix Things That Could Become Big Problems.” TechCrunch. Accessed April 8, 2019.

This article discusses several major issues with AR that it argues should be addressed before they become serious threats to safety and order: privacy violations, property disputes, and even physical harm. Many of these claims read like worst-case scenarios, but they are still worth anticipating during AR development to ensure they are avoided.
Bowles, Nellie. “A Dark Consensus About Screens and Kids Begins to Emerge in Silicon Valley.” The New York Times, October 26, 2018, sec. Style.

This article discusses how Silicon Valley tech executives are increasingly limiting the amount of time their children can spend on screens, often denying them any screen time at all. I think it is certainly eye-opening and probably a red flag that the very people designing our technology are shielding their children from its harmful, addictive effects.
Buolamwini, Joy. “Algorithms Aren’t Racist. Your Skin Is Just Too Dark.” Medium (blog), May 29, 2017.

This article deals with a deep-seated bias that is pervasive in technology. The author’s example is facial recognition, specifically the problem that people with darker skin are not recognized as easily as people with lighter skin. At first glance, this may seem like a coincidental, unintentional consequence of the way lighting, contrast, cameras, and computer vision work. But the camera itself was developed to work better with lighter skin tones, because those were the skin tones of the people developing it. The article introduces the notion of the “Coded Gaze,” whereby technology reflects the preferences of those who have the opportunity to develop it. I think this is a fascinating phenomenon, and one that can provide a good account of why we discover so much bias in our technology and algorithms despite most likely not intending to put it there.
Chimero, Frank. “Everything Easy Is Hard Again.” Frank Chimero (blog), October 12, 2017.

This article was written by an experienced web designer lamenting the needless complexity of the current tools and workflows that revolve around building websites. Having powerful and complex content management systems for enterprise-level sites is one thing, but the fact that all of these complicated tools are becoming the norm even for smaller projects “pulls the ladder of opportunity up” for young designers who are just getting started. How can designers learn HTML if most of the HTML that’s out there is generated by machines and is completely illegible? How can people get small projects off the ground when you have to install package managers for your package managers? The author argues that we need to “slow down” as a community, allow ourselves to fix things, and focus on discovering the direction the internet should take, rather than blindly moving quickly without knowing where we are going. A very interesting read.
Buytaert, Dries. “From a World Wide Web to a Personal Web.” Dries Buytaert (blog). Accessed October 10, 2018.

I found this article relevant and thought-provoking. It was written by the founder of Drupal just two days before I read it, discussing a cutting-edge new personal-data-control company and how he sees the future of the web. Dries discusses his view of personal data and how he believes new technology will be able to “disrupt” the current digital marketing world to create a better online experience for both vendors and consumers, with more transparency and control over privacy. Certainly a relevant topic at a time when large content providers have centralized, opaque control over our personal data.
Griffith, Erin. “Techies Still Think They’re the Good Guys. They’re Not.” Wired (blog), December 17, 2017.

I read the article “The Other Tech Bubble,” which discusses society’s changing view of Silicon Valley and the insular bubble within tech companies that so far has prevented them from seeing this broader perspective. It is interesting that the world is paying more attention to the negative aspects of big tech companies while the tech companies themselves are “still asking whether it’s possible to do something, and not whether they should.”
Haselton, Todd. “Microsoft HoloLens 2: Army Plans to Customize as IVAS.” CNBC, April 6, 2019.

This article discusses how a modified version of the HoloLens 2 is being developed for the military to create a heads-up display that will enhance soldiers’ awareness and efficiency, both in training and on the battlefield. It raises several questions worth considering. Should tech companies work with the military? On one hand, there is the idea that tech companies are “developing instruments of war”; on the other, they are (ideally) increasing the safety and efficiency of American soldiers. What are the implications of the military monitoring and directing technological growth? Google quit a military contract after its employees expressed concerns about developing weapons, and now Microsoft employees are voicing the same concerns. Why does Microsoft stick with the contract?
Hayes, Chris. “Could Virtual-Reality Training Be the Key to Fewer Police Shootings?” Vanity Fair, March 21, 2017.

This article discusses using virtual reality systems for police training. A reporter describes his experience testing one of these systems, and interacting with a training officer who described the process of training police with the VR systems, and the benefits and challenges of doing so.
Hicks, Mar. “Shutting the Barn Door after All the Racist, Sexist Horses Are Gone.” Tweet. @histoftech, October 20, 2018. https://twitter.com/JeffDean/status/1053512817083465729.

This Twitter thread calls out Google, in a fiery tone, for being slow and insincere in addressing growing ethical concerns about algorithmic bias. Google’s reluctance to share how it processes and serves up data under the hood is troubling, and its past (and present?) refusal to accept responsibility for the actions of its algorithms is irresponsible.
Johnson, Kyle. “Big Data, Little Ethics – One CIO’s Musings.” One CIO’s Musings, July 27, 2017.

I read this article because it provides yet another example of companies “thinking about what they can do with data, rather than what they should do.” I thought it was interesting that the release of your data is often “hidden” in a long TOS document that is rarely read but that you must nevertheless agree to in order to use a product. If companies require us to release our private data to use products that are all too often necessary, how much of a role can ethics play in a consumer’s life? This seems like a conversation that needs to be taken up by the corporations that want to do things with the data.
Kofman, Ava. “Can Virtual Reality Training for US Police Help Stop Officer-Involved Shootings?” The Guardian, July 11, 2016, sec. Technology.

This article recounts a reporter’s story of testing the VirTra 300 virtual reality police training system. It reveals some of the complex issues around police training, and some of the strengths and weaknesses of using virtual reality to address problems with police decision making.
Lapowsky, Issie. “The Virtual Reality Sim That Helps Teach Cops When to Shoot.” Wired, March 30, 2015.

This article follows a reporter’s experience testing the VirTra virtual reality police training system. It discusses how virtual reality can be used not only to teach officers to shoot better, but to train them to decide whether they should shoot at all.
Madrigal, Alexis C. “Google and Facebook Failed Us.” The Atlantic, October 2, 2017.

I read the article “Google and Facebook Failed Us.” I thought it was interesting that the article takes the perspective that more humans should be involved in the information filtering done by the algorithms behind Google and Facebook’s search engines. On one hand, it seems obviously wrong that the algorithms can in some instances surface obvious misinformation and unreliable sources at the top of a Google search, but it also seems potentially problematic to entrust specific people to filter what you do and don’t see. Who would get to make those decisions?
———. “The Servant Economy.” The Atlantic, March 6, 2019.

This article analyzes in great detail the business model of “Uber-for-X” style businesses. It discusses what led to the craze, and follows up on a large sample of the businesses that sprang up to see where they ended up. I thought its analysis of the implications of “servant economy” businesses was very interesting: in this kind of economy, the only people who win are the ones who use the app, which provides only marginally more convenience than they otherwise would have had, while causing tremendous losses for both the platforms and the workers.
Matsakis, Louise. “Facebook Wants to Connect You With Your ‘Secret Crush.’” Wired, April 30, 2019.

This article caught my eye because it talks about one of Facebook’s new ideas to collect more personal information and ensure that you spend more time riveted to your devices.
McEvoy, Fiona. “Six Ethical Problems for Augmented Reality.” Becoming Human: Artificial Intelligence Magazine, December 15, 2017.

This article discusses possible ethical issues that could come with development of AR technology, such as virtual graffiti, hijacking public spaces, or violating anonymity/privacy. I think this article brings up very important questions that should be asked about any future AR development.
Metz, Cade, and Mike Isaac. “Facebook’s A.I. Whiz Now Faces the Task of Cleaning It Up. Sometimes That Brings Him to Tears.” The New York Times, May 17, 2019, sec. Technology.

This article discusses the difficulties of using AI to remove toxic content from Facebook. It also discusses Schroepfer’s professional story, and how he ended up on this particular AI team. I think it raises the issue that we may have very unrealistic expectations for both AI and big companies like Facebook: we expect them to instantly solve all our problems, but we don’t understand the drastic scope of those problems, or that they won’t go away easily.
Rand-Hendriksen, Morten. “Using Ethics in Web Design.” Smashing Magazine.

This article thoroughly discusses ethical issues facing front-end web design today. It discusses the current state of ethics on the web, and how we got to where we are today. The author also proposes a system for consistently evaluating design decisions on an ethical basis to make the web a better place for all. I find this article engaging because the author takes a holistic approach to engaging with modern ethics in technology, even applying research and theories by philosophers to modern, technological ethical dilemmas.
Newman, Lily Hay. “Google Wants to Kill the URL.” Wired, September 4, 2018.

This is a very interesting article outlining how the Chrome security team has made and wants to continue to make massive fundamental changes to the web. It is worth considering whether, as a private company, they should be unilaterally making these decisions. On one hand, they clearly have a vested interest. On the other, they have the well-being of the web in mind, and are one of the few groups in a powerful enough position to affect change.
———. “How Google Chrome Spent a Decade Making the Web More Secure.” Wired, September 4, 2018.

This article discusses the effort the Google Chrome team has put into changing web security standards, such as making HTTPS the default over HTTP. It raises the issue of a single browser vendor and development body having so much control over the future of web standards.
Reyes, Eddie. “How Do Police Use VR? Very Well.” National Police Foundation (blog), August 14, 2017.

This article talks about police using VR goggles for training, and the advantages of the technology.
Roesner, Franziska, Tadayoshi Kohno, and David Molnar. “Security and Privacy for Augmented Reality Systems.” Communications of the ACM 57, no. 4 (April 1, 2014): 88–96.

This is an older article projecting the future of AR, assessing possible security and privacy risks of the various technologies required to make AR experiences possible. I think it is interesting to see what researchers in 2014 thought the issues with AR would be, and which of those issues are still relevant today.
Sandvig, Christian, Kevin Hamilton, Karrie Karahalios, and Cedric Langbort. “Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms.” In Data and Discrimination: Converting Critical Concerns into Productive Inquiry, 1–23. Seattle, WA, 2014.

I started reading “Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms,” which I find interesting because I have heard before about implicit bias in things like search engines and advertising algorithms, but I am curious to learn more about how these problems occur (the designers often introduce them completely unintentionally) and how we can diagnose and address them.
Stein, Scott. “Google Brings AR and Lens Closer to the Future of Search.” CNET. Accessed May 21, 2019.

This article discusses the future of Google Lens and Google search as the company advances its AR technology. It talks about how Google is focused on practicality and utility when it comes to its augmented reality apps.
———. “Google Maps Doesn’t Want You Walking around in AR.” CNET. Accessed May 21, 2019.

This article is an in-depth review of Google’s development of the Google Maps AR experience. It discusses Google’s design philosophy of trying to make a safe, unobtrusive AR experience, and the challenges of finding a design that pleases everyone and doesn’t lead users to form strange beliefs or usage habits.
Warner, Claire. “This Is How ‘Pokemon Go’ Actually Works.” Bustle. Accessed May 6, 2019.

This article provides a basic explanation of how Pokemon Go works. It is a great, simple overview of the technical structure of the game, explaining clearly how it arranges Pokemon and other virtual objects in space and how it keeps all players in sync. It also alludes to the performance issues Pokemon Go users experience as a result of the game relying on a single centralized server.
Wilson, Mark. “Google’s New Experiment Lets You Tag Digital Graffiti in the Real World.” Fast Company, March 20, 2018.

This article discusses Google’s Just a Line app, explaining what it is and how it works, and how it fits into the context of Google’s AR experiments.