Any technology is in itself neutral


Any technology is in itself neutral, with the possible exception of whatever artificial intelligence will become. How we use it, on the other hand, is typically anything but neutral. The Internet is no different. It started out looking wonderful because it was, but also because the people who could use it were more technically minded, presumably more intelligent, and… I think I’ll leave it at that. Over time, the Internet, like all computer technology, has become commonplace and easy to use. Almost everyone now has a computer in their pocket that can access the Internet.

Unfortunately, the Internet eventually brings out the worst in people, or the worst people, depending on how you look at it. And yes, they are dangerous. Bob.

Good stuff from Bob in response to this.




3 replies

  1. I think that there is a degree of nuance here, and a need to be slightly careful in referring to technology as “neutral”.

    If I understand Bob’s stance correctly, it is that most technologies are capable of being used in different ways, and that a user of the technology can determine whether to use it in a way regarded as good or beneficial, or in a way which is considered nefarious or harmful. This is, I think Bob is saying, a decision of the user, and not of the technology itself.

    I’d support this, to an extent, but I don’t think that this is quite the same as technology being “neutral”, and I think we are only just scratching the surface of the issue of technological / algorithmic neutrality, or its lack.

    When people design things, or program things, they make conscious design decisions – that the communications mechanism should be anonymous or traceable, for example, or end to end encrypted or not encrypted, or that content should be capable of moderation or not. These design decisions both influence the way that technology is used and control it (Lessig’s notion of “code as regulation”). So while a company may not state expressly “we want our product to be used for purposes x, y and z”, how they program it and how they position it is likely to be a major factor in how it is used.
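
    Purely as a toy sketch of that idea (everything below is invented for illustration and describes no real product): a single design-time decision can make a whole category of use structurally impossible, whatever any user or moderator later intends.

    ```python
    # Invented illustration of "code as regulation": a design decision
    # (end-to-end encryption) settles whether server-side moderation is
    # even possible, before any user or policy gets a say.
    from dataclasses import dataclass

    END_TO_END_ENCRYPTED = True  # fixed at design time, not by users


    @dataclass
    class Message:
        sender: str
        payload: bytes  # opaque ciphertext when E2E is on


    def can_inspect(message: Message) -> bool:
        """Whether the server can read the content at all."""
        return not END_TO_END_ENCRYPTED


    def moderate(message: Message) -> str:
        if not can_inspect(message):
            # The architecture, not a policy choice, decided the outcome.
            return "cannot moderate: content is opaque to the server"
        return "blocked" if b"forbidden" in message.payload else "allowed"


    print(moderate(Message("alice", b"opaque bytes")))
    # -> cannot moderate: content is opaque to the server
    ```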

    In some cases, the use case may indeed be nefarious in itself. If someone puts out a tool with the aim of it being used for something malicious, can that technology really be said to be “neutral”? One might distinguish this from tools which have obvious legitimate and nefarious uses: computer security testing tools and hacking tools are pretty much one and the same.

    Sometimes, conscious design choices will result in non-neutral outcomes which might not be visible to users of the technology. A common example of this is whether a search engine’s site ranking system is, or should be, “neutral”. Can, or should, a search engine program its system in such a way that it promotes some sites over others, based on invisible, unexpected criteria? There is a difference between not understanding why one site is ranked higher than another but accepting the neutrality of the algorithm, and a site being ranked more highly because it forms part of the corporate group of the search engine operator and the operator wants to promote it. If this interference is not visible and notified, it would seem to violate a (false) expectation of algorithmic neutrality.
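
    A toy sketch may make the worry concrete (every site name and number below is invented; this describes no real search engine): a single hidden multiplier is enough to put an affiliated site above a more relevant one, invisibly to the user.

    ```python
    # Invented illustration of a non-neutral ranking: a hidden boost for
    # sites in the operator's corporate group. All names and weights are toys.
    AFFILIATED = {"ourshopping.example"}


    def rank_score(site: str, relevance: float) -> float:
        """A 'neutral'-looking relevance score with an invisible thumb on the scale."""
        return relevance * (1.5 if site in AFFILIATED else 1.0)


    results = [("independent.example", 0.90), ("ourshopping.example", 0.70)]
    ranked = sorted(results, key=lambda r: rank_score(*r), reverse=True)
    print(ranked)  # the affiliated site now outranks the more relevant one
    ```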

    Sophisticated readers or viewers are used to accepting that their preferred news source is not “neutral”, and that what is presented, the angle taken, and so on is determined by the provider. Less sophisticated consumers may not appreciate that they are receiving just one, partisan, worldview, and may take what they are being given for the objective truth, rather than something decidedly non-neutral. As services increasingly attempt to learn user preferences, and to tailor content to viewers, we run a risk – unless we consciously attempt to avoid it – of being stuck in an increasingly non-neutral environment, with content providers trying to show us what they think we want to read, to keep our attention and our clicks.

    That probably means things we are likely to agree with, or stories positioned in a certain way. If we are aware of this “filter bubble”, we may be able to mitigate its effects, and actively seek out alternative world views and positions, although I am sceptical that many will do so, even if they have the intent or desire. But what about those who are not aware of the increasing tailoring of content? Even if you appreciated, for example, that your favourite news site had a particular leaning, would you still assume that, as with a newspaper, where everyone who reads the same paper sees the same content, everyone who reads the same news site as you sees the same stories with the same content? Or will whatever neutrality a news site might have retained now be removed, with news just being about what the site wants to promote to you, for whatever reason? Who controls those algorithms or sets their agenda? Will users know that something which they might regard as neutral to a greater or lesser extent is, in fact, not neutral at all?
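
    Again, a deliberately toy sketch (invented scoring, invented stories) of the mechanism being described: two readers of the same site, with opposite leanings, are handed the same stories in very different orders.

    ```python
    # Invented illustration of a "filter bubble": rank stories by how
    # closely they match the reader's inferred leaning. A toy, not any
    # real recommender system.
    def personalised_feed(stories, user_leaning):
        """Leanings run from -1.0 to +1.0; stories closer to the user score higher."""
        def agreement(story):
            _title, leaning = story
            return 1.0 - abs(leaning - user_leaning) / 2.0
        return sorted(stories, key=agreement, reverse=True)


    stories = [("Story A", -0.9), ("Story B", 0.0), ("Story C", +0.8)]
    print(personalised_feed(stories, user_leaning=-1.0))  # Story A first
    print(personalised_feed(stories, user_leaning=+1.0))  # Story C first
    ```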

    Last, we should also be mindful of unconscious bias and its impact on algorithmic and technological neutrality. I may intend for my algorithm to be neutral, but is my idea of what “neutral” means and looks like actually influenced by who I am, my experiences, my upbringing and so on? My idea of neutrality, and my algorithmic implementation of it, may look hugely different to, and produce substantially different outcomes from, something written by someone with a different background.

    With further diversification of those who write code and those who run companies which make products, perhaps we will see continued exploration of what neutrality means: for the time being, “neutrality” is probably defined by a reasonably affluent, probably white, most likely male, programmer, even if they are attempting to make a conscious effort to avoid bias and make their code “neutral”.

  2. Shaun, please post this to the main page.

    I agree with you 100%. I said “technology” where I was thinking more of basic knowledge. For example, there’s the knowledge of how nuclear fission works, and then there’s the technology and manufacturing capability to build either a nuclear reactor or an atomic bomb. The basic knowledge is neutral. How it’s used is not.
