Human-LLM Codependency
One of my new hobbies is having conversations with large language models, both to analyze how they work and to just kick ideas around. Unlike most people, I don't go to ChatGPT to do this. In fact, I don't even go online. I have no illusions that a natural language engine is "smart," but I do know that it reflects the collective, synthesized perspectives of a phenomenally huge mass of human writing and thought. It is worth investigating. Skeptically.
So, I take open-source foundation models and run them on my own local hardware. I set a system prompt and a low sampling temperature (sometimes called "heat") to try to minimize sycophancy and hallucination, and I contemplate the resulting output. I see it less as an oracle or an intelligence, or even a conversation, and more as a way to read the collective tea leaves encoded in the model's weights. It's a very interesting way to interact with the collectively mapped knowledge of humanity to date.
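For the curious, here is a minimal sketch of what that setup looks like, using the llama-cpp-python library; the model file path, system prompt wording, and question are all placeholders, not a prescription:

```python
# Minimal sketch: chatting with a local open-weights model via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_ctx=4096,      # context window size
    verbose=False,
)

# A skeptical system prompt plus a low temperature, aimed at damping
# sycophancy and hallucination.
response = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": (
                "Be terse and literal. Do not flatter. "
                "If you are unsure, say so rather than guessing."
            ),
        },
        {"role": "user", "content": "What are the known failure modes of RLHF?"},
    ],
    temperature=0.2,   # low temperature: sample closer to the model's modal output
    max_tokens=512,
)

print(response["choices"][0]["message"]["content"])
```

A temperature near zero makes sampling nearly deterministic, which tends to surface the model's most heavily weighted continuations rather than its long tail, a reasonable default when the goal is reading the weights instead of generating novelty.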
