Nothing wrong with coding the world as it is

We must regulate artificial intelligence because – well, according to the Fabians, we must regulate artificial intelligence because. Their argument being that it produces results they don’t like, so they must get to regulate it to produce results that they do like.

This is all dressed up in the usual feminist arguments. It’s largely men who do the AI coding, largely white men to boot. Therefore that coding will encapsulate their innate prejudices and this would be bad. They do mention the idea that perhaps women could be doing more of the coding, then reject it on, I assume, the grounds that math is hard. Instead the insistence becomes that women – more specifically, women who share the Fabian outlook – should be able to tell those white men what to code into AIs.

This is all backed up by this example of how current AI is wrong:

What if the workforce designing those algorithms is male-dominated? This is the first major problem: the lack of female scientists and, even worse, the lack of true intersectional thinking behind the creation of algorithms.

Examples of bias were reported by the Guardian a few years back, showing that searching Google for the phrase “unprofessional hairstyles for work” led to images of mainly black women with natural hair, while searching for “professional hairstyles” offered pictures of coiffed white women.

The thing is there’s nothing wrong with that coding. Sure, it might not reflect the society we’d like to have but it’s a pretty good representation of the one we do have. Vast afros are thought to be unprofessional for work, a tight coiled bun is thought to be more so.

No, stop, do not start thinking about how that’s not the way it should be. I agree entirely that it shouldn’t. But is that a useful outline of how current society works? Yep, it is. So, those white male AI coders haven’t made some dreadful mistake there, we’ve got a reasonable approximation to how our society does work today.

It’s entirely true that if one or other line of thinking, political or social philosophy, manages to gain control of the AI coding then the AIs will work in different ways. They might even reflect the society wished for by that philosophy. They’ll also not be very useful and will therefore die off, as they’ll neither reflect nor manage to work in the reality we inhabit. And since we’re never going to give absolute power over all AIs to just the one group, those that don’t work will simply be outcompeted.

The Fabian demand is that AIs should be built to reflect their desired world. Which has two problems, the first being that not all share their desires. The second and much more important one being that AIs designed to their desired world won’t work in this extant world – an AI that doesn’t work not being all that useful.



  1. The Fabians were very taken with this bit of Omar Khayyam:

    Ah Love! could thou and I with Fate conspire
    To grasp this sorry Scheme of Things entire,
    Would not we shatter it to bits — and then
    Re-mould it nearer to the Heart’s Desire!

    Except they allow no disagreement, and all attempts to build the better scheme of things elsewhere and get folks to voluntarily switch have failed, miserably.

  2. Regulating AI to avoid promulgation of a “male viewpoint” is being done at the same time that many universities are disparaging as male the concepts of scientific method, study of Western Civ, individual accountability, algebra, and so on. The new call for censorship won’t just break AI but will further empower the worthless.

  3. I’m pretty sure that Google’s Image search isn’t even using much “AI”. Follow any of the “professional hair for work” results and you find pages whose titles include “professional” and “hair”. Then someone probably linked to them from another site, boosting the ranking.

    The problem with Guardian types is that what’s actually going on there, with both algorithms like PageRank and with AI, is that they’re actually more democratic than the Guardian is. The whole attraction of analysing data is that it avoids human bias. It’s why Tesco and Wal-Mart crushed their opponents for a while – they built massive data warehouses with terabytes of data about how customers behaved and then analysed it to serve them better. The nappies-and-beer story has been twisted in the retelling, but it’s a parable of how large companies learn things that no human would have spotted through assumptions.

    • The problem is that the data is racist – at least to the extent that the data does not present the world in the way that the Social Justice Warriors would like. IQ is the prime example. But another one would be the App that told you if you were going to a dangerous neighbourhood or not – very useful for tourists one would think.

      But of course what it did in effect was tell you if the area of interest had a lot of Black people.

      The truth just is sexist and racist. There is no way around it. This is what the data says. We all have to engage in endless management of reality à la 1984 to survive.
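The PageRank mechanism mentioned a couple of comments up can be sketched in a few lines. This is a minimal illustration of the idea, not Google’s actual code, and the four-page link graph is invented for the example.

```python
# Minimal PageRank power iteration: the ranking is a property of the
# link data, not of any page-specific rule written by a programmer.
# The four-page link graph below is invented purely for illustration.

DAMPING = 0.85  # the standard damping factor from the original algorithm

# page -> pages it links to (hypothetical graph)
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # every page keeps a small base rank, then receives shares
        # of rank from the pages that link to it
        new = {p: (1 - DAMPING) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs)
            for q in outs:
                new[q] += DAMPING * share
        rank = new
    return rank

ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # prints "C"
```

C comes out on top simply because more pages link to it – the output follows from what the data says, which is the point the comment is making.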

  4. The discrepancy in the perceived professionalism of female workers with different hair styles is nothing to do with race. It is probably to do with whether females with neat hairstyles or 1 metre wide afros are more likely to drop a hair in your sammich or coffee when they’re making it for you to eat or drink or to get their hair tangled in filing cabinets or keyboards.

  5. In the example given the results are fuck all to do with the coding of the algorithm; they come from the training set fed to it. Algorithms are no more male or female than gravity is.

    • That’s how I understand it. I could be incorrect but don’t the coders build some neural network and then feed it info to learn for itself?

      Any prejudices will come from the data fed in, not from the programmer.
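That point – the prejudice lives in the training data, not the code – can be made concrete with a toy example. The “learner” below is just counting co-occurrences, and both training sets are invented; the identical, untouched code gives opposite answers when fed opposite data.

```python
# Same learning code, different training data, opposite "prejudice".
# Both training sets are invented purely for illustration.

def train(examples):
    """Count how often each feature value co-occurs with each label."""
    counts = {}
    for feature, label in examples:
        counts.setdefault(feature, {}).setdefault(label, 0)
        counts[feature][label] += 1
    return counts

def predict(model, feature):
    """Return the label most often seen with this feature value."""
    labels = model[feature]
    return max(labels, key=labels.get)

# Two hypothetical training sets with opposite correlations.
data_a = [("afro", "unprofessional")] * 8 + [("afro", "professional")] * 2
data_b = [("afro", "professional")] * 8 + [("afro", "unprofessional")] * 2

print(predict(train(data_a), "afro"))  # prints "unprofessional"
print(predict(train(data_b), "afro"))  # prints "professional"
```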

  6. The Fabians – those friends of eugenics and advocates of mass-murder – are scum.

    They likely fear that AI might create something evil.

    AI would have to go some to create anything as bad as the fucking Fabians themselves.

  7. AI is largely about very sophisticated pattern matching, using statistical methods in many cases. For instance you devise a model then train it with some input data: provide it with a bunch of photos of faces labelled by gender and, after a while, it will be able to look at a new photo and identify the gender of the person in it with high accuracy.

    But that’s barely scratching the surface of what it can do. The spy cameras dotted around the UK’s roads are very good at reading number plates thanks to AI. Likewise digital speed cameras. Motion sensors can distinguish between a dog and a human.

    But these decisions are made by identifying patterns; they are not programmed in by programmers. I can’t even see how gender bias could be programmed into the code. You might as well say it’s witchcraft!

    • The problem is that a general pattern matcher can often find the wrong pattern.

      Say a university researcher develops an AI to look at people’s academic records and figure out which are going to be the high-flyers. They train it by feeding in the academic records of a lot of young researchers, plus how they’re doing 20 years later, and let it identify hidden patterns. The thing achieves an 80% success rate. No more fallible humans doing interviews! Get an AI to do it objectively!

      Then somebody does a few experiments, and finds that almost all of the record has no effect on the output. The only thing it responds to is political allegiance – conservatives are automatically rejected, because virtually nobody who reaches the top in academia is conservative.

      The AI is doing exactly what it was designed to do – finding predictive patterns. If you want to bet on who is likely to succeed in the human world that actually exists, it’s doing exactly what you want. If you want to know who you’re supposed to give a job in order to make the most scientific progress, not so much.

      Quite often, the researchers who develop it have *no idea* what an AI is looking at to make its decision. But it identifies 96% of hidden paedophiles correctly by looking at a photo of their face, so who cares? Let’s use it! Or if we can’t get the courts and judges to use it, we can at least sell a mobile phone App based on it. What could possibly go wrong?
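The confound described in that reply is easy to reproduce. In the sketch below every record is fabricated so that “success” tracks a hypothetical political-allegiance field far more than grades; a learner that just picks the most predictive single feature duly latches onto allegiance and all but ignores the academic record.

```python
# Sketch of the failure mode above: a pattern-matcher latching onto a
# confound. All records are fabricated purely for illustration.
import random

random.seed(0)

# Each record: (grade_band, allegiance) -> succeeded in academia?
records = []
for _ in range(1000):
    grade = random.choice(["high", "mid", "low"])
    allegiance = random.choice(["left", "conservative"])
    # In this invented world success tracks allegiance, not grades.
    succeeded = allegiance == "left" and random.random() < 0.6
    records.append(((grade, allegiance), succeeded))

def accuracy(feature_index):
    """Accuracy of predicting success from one feature alone,
    using the majority outcome seen for each feature value."""
    outcomes = {}
    for features, label in records:
        outcomes.setdefault(features[feature_index], []).append(label)
    majority = {v: max(set(l), key=l.count) for v, l in outcomes.items()}
    correct = sum(majority[f[feature_index]] == label for f, label in records)
    return correct / len(records)

print(accuracy(0))  # grades alone: no better than always predicting failure
print(accuracy(1))  # allegiance alone: clearly more predictive
```

The model is “doing exactly what it was designed to do” in the comment’s sense: allegiance really is the best predictor in this data, which is precisely why the prediction is useless for deciding who would make the most scientific progress.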

  8. As others have said, there’s no intelligence here; it’s just classic garbage in (the training data), garbage out.

    Here’s a very good video of how neural networks work and how they are trained. It takes the simple case of number recognition and is aimed at the non-techie:

    Check out his Bitcoin explanation as well; it’s the best one I’ve come across: