  • I installed a trojan in my car

    I don’t think I’ve heard the term trojan with reference to computer viruses for an age. I presume ‘malware’ is what everyone calls those virus delivery mechanisms these days.

    In this case, though, I think trojan is the better term, because the latest update to the software running my Tesla Model Y has left me feeling uneasy about this “gift” from Musk’s increasingly dystopian empire.

    To be clear, although we did buy the car “before Elon went crazy”, even then we felt the need to search for alternatives. However, it’s fair to say that, in those days before the fascist salutes, governmental hacking and the overall descent into crush-the-libs madness, the Model Y was genuinely the best car for our family needs. The Y still gets regular software updates which have in general been good. Increasingly, though, they seem to include a fair bit of spammy junk. Did we really need a “Tron-Mode” to tie in with what was by all accounts a really bad film? Admittedly, Santa-Mode was witty for one drive, but now it’s just sitting there, gumming up the code as useless cruft.

    And now we have, with the most recent update, the unwanted, stupid gremlin of Musk’s AI bot Grok in our car and I want it gone. I had hoped, when I first heard of its impending introduction as a beta to the Tesla software, that the German regulators would put a hold on its introduction. To no avail, clearly. I had also hoped (like many others) that I would be able to uninstall it like some daft game (which also can’t be uninstalled). But no, I am now permanently a long-press on the right scroll-button away from activating this nuisance. It happened for the first time over the weekend, when I wanted to open the glove compartment (which has no handle…) for my wife: up popped a dark, evil-looking info-panel with the dreaded swoopy, gloopy melting G icon of Grok. Taken aback (whilst driving!) I nevertheless intoned the (frankly silly) German incantation of “Handschuhfach öffnen” - whereupon Grok proceeded to tell me how to open the doors in various ways.

    It got me nigh-on shrieking at the car to shut the whatever word you would like to imagine here up - which of course it didn’t, until it was finished. What also freaked me out was that, whilst it was talking and I was driving, the UI was also asking me to create a Grok account, which I refuse to do. I then noticed some small text mentioning that a long press would activate Grok - and a short press got me back to the simple voice commands for something that really shouldn’t need a voice command (don’t get me started on turning air recirculation on and off via touchscreen or voice command!). For Tesla boosters, that’s job done, the best of all possible worlds.

    For me, yes, it’s terrible stuff. But, to return to the point of this post, I fear that this “beta” is a trojan horse. Not that it’s delivering malware as such (with Grok, that’s debatable), but that Tesla / SpaceXAIX or whoever they are now will slowly push and grow this malevolence to take over more and more of the software.

    All I can hope for is that regulators (and, maybe, customers - oh, who am I kidding) push back against any such creep. If that means different versions of the software between MAGAland and the EU, then that’s more than fine by me. I don’t want to feel forced to sell the car (unusually, we bought it rather than leased it), and we’re still planning on keeping it longer term, but that’s dependent on Tesla playing by at least some enforced rules - and on the other hand anybody wanting to buy one at all a few years down the line…

    → 8:25 PM, Mar 3
  • Languages, Limits and Meanings

    Another happenstance connection of links after the previous one on vast numbers. I read this article on the Large Language Mistake of LLMs, in which the author, Benjamin Riley, posits that language does not engender intelligence; and then the essay from Luciano Floridi On Saussure and Heidegger about language, in which he contrasts Ferdinand de Saussure’s conclusion that no language has closer access to reality than any other with the wishful thinking of Heidegger that German is the ultimate discoverer of reality.

    Basically, then, LLMs reflect neither intelligence nor reality…

    → 9:20 PM, Nov 27
  • AI in universities - a nuanced view

    A nice article in the Guardian from a student about how she sees the use of AI in universities. The whole university system, as well as AI tech, is still uncertain following COVID and other ructions. A good counterpoint to the professor’s view that I mentioned in my previous post on AI.

    Her perspective as to the actual use of AI matches mine - as a research assistant more than a substitute writer - but, then, she would say that in public, on the Guardian, wouldn’t she…?!

    (meaning, to be sure, that the author of a Guardian article would be self-selecting to be that type of person)

    → 5:21 PM, Jun 29
  • AI and me: a state of affairs in June 2025

    I’ve long struggled with an inability to get AI. Whether that’s to “get into”, to “get along with”, or simply to understand it, I’ve not found it to have been as appealing as many others have made it out to be.

    There are situations where its use is unavoidable, but where - and this is where it shines - it’s also not shouting “THIS IS AN AI TOOL!” These are mostly situations such as translation services like DeepL or Google Translate, which all use AI / machine learning to re-bolt together sense in the translated language. I have also started using search services (Perplexity, Kagi Assistant) for researching projects, where the assistant can propose answers and (for me, critically) citations: like when I’m delving into specific, niche topics like brass playing and the embouchure.

    There’s a good example here of how we need to work with and apply our scepticism to the profferings of AI assistants. There’s a long-held belief that brass players buzz their lips to make sound. This isn’t actually true: as concert trombonist Christian Lindberg pointed out in his YouTube video from 2015, when you remove the trombone from the mouthpiece whilst playing a note, you end up just blowing air through the mouthpiece.

    This is backed up by considering the basic physics of a wind instrument: we’re setting up standing waves at the desired frequencies within the pipe. Our lips are actually vibrating sympathetically with those waves (we can also start notes just by blowing into the instrument without tonguing - it just starts sounding, like an organ pipe). Now, of course, our embouchures are still doing an awful lot, and we have to work those muscles hard to keep those note-playing systems stable. But each query I made via AI agents led to flat assertions that brass players buzz, and sometimes, hidden in the longer summaries, perhaps some cursory mention of standing waves.
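    The standing-wave picture can be made concrete with a toy calculation. This is a minimal sketch assuming an idealised cylindrical pipe open at both ends; real brass instruments are lip-driven and flared, so the tube length here (2.75 m, roughly the tubing of a tenor trombone) is an assumed, illustrative figure, not a model of an actual instrument:

    ```python
    # Standing-wave (resonant) frequencies of an idealised open pipe.
    # Real brass instruments are flared and lip-driven, so this is only
    # a rough illustration of the physics, not a trombone model.

    SPEED_OF_SOUND = 343.0  # m/s, dry air at ~20 °C

    def open_pipe_resonances(length_m: float, n_modes: int = 4) -> list[float]:
        """First n_modes standing-wave frequencies (Hz) of an open-open
        pipe of the given length: f_n = n * v / (2 * L)."""
        return [n * SPEED_OF_SOUND / (2 * length_m)
                for n in range(1, n_modes + 1)]

    # Assumed ~2.75 m straight tube, purely for illustration:
    for n, f in enumerate(open_pipe_resonances(2.75), start=1):
        print(f"mode {n}: {f:.1f} Hz")
    ```

    The point of the sketch is simply that the pipe itself picks out a discrete series of frequencies - the player’s lips lock on to one of them rather than generating the pitch by buzzing alone.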

    It’s important to realise that the AI has no understanding of what it’s producing! This means that it doesn’t check for internal or external inconsistencies. The human who requested the research needs to work to understand what the AI has produced and why: was it selected for correctness, or for some notion of popularity or frequency on the web?

    Where all this came from

    Whilst my daughter was having her teeth looked at, I was in the dentist’s waiting room reading an article by Brian Klaas on his blog The Garden of Forking Paths on The Death of the Student Essay - and the Future of Cognition and was struck by three passages that I’d like to quote. The first references something that I’ve not mentioned yet: using AI to write articles on a specific subject for us. Klaas talks about how we (most pertinently, his students) can give too much credence to the results from an AI query under…

    … the mistaken belief that the spat out string of words in a reasonable order is the only goal, when it’s often the cognitive act of producing the string of words that matters most.

    This gets to the heart of my unwillingness to use AI as a writer, or a “copilot”, at all. Firstly, I dislike the loss of agency and possession. AI-generated results neither sound like me, nor do they feel “valuable” to me, since they take away the hard but nevertheless rewarding work of generating understanding for myself, the author.

    He continues later:

    If you want to know what you think about a topic, write about it. Writing has a way of ruthlessly exposing unclear thoughts and imprecision.

    Others may (or may not) read about it, but what all this writing is really doing is creating links and then conversation points for the next people in our communities. So, using AI to perform research for us and generate some thought-starters can be legitimate, but we then have to translate these into our own thoughts and words to know that we have internalised them.

    Another Klaas quote from the same article touched a nerve:

    we’re engaged in a rather large, depressingly inept social experiment of downloading endless knowledge while offloading intelligence to machines.

    This is a key foible of mine, and led directly to this post. I’m terrible for saving links of interesting things or making notes, collecting them all in some bookmarking or note-taking app - never to be made use of at all.

    So that’s what this post is about, really: the challenge to myself that if I read something that I want to capture, then I should write about it - for others, in public. I’m under no illusions: nobody reads this blog, but its words will be out in the open and people could potentially - AI crawler bots will - stumble upon it, so I should be able to stand by my words (or write amending posts on them later, if called to do so). So I intend to do more “link-and-quote-posting”, with commentary, similarly to John Gruber on his Daring Fireball blog. He usually adds at least a sentence of his own to each quote, which, on aggregate, is a fine way of understanding who the person (or, at least, the persona) behind the blog is, what they think and what, overall, they stand for.

    Where does search end and AI begin?

    As I mentioned at the beginning, I do increasingly use AI for (re)search, and I feel the boundaries between classical “Googling” and asking Perplexity and co for answers plus citations blurring ever more. This is the research-assistant role that I (similarly to Reid Hoffman, LinkedIn founder and AI investor) treat AI as. It’s not an oracle, but it can be an exceedingly productive snuffler, rooting things out like a young or ineffective wild boar, unsure as to whether what it unearths is fit for consumption.

    Where does this post end?

    Right about here. But the thinking - and, hopefully, the understanding - doesn’t!

    → 9:53 PM, Jun 20