AI Use

The posts on this page will frequently use language models to generate the majority of their content. This means they may reflect ideas and concepts that I don’t personally understand. Is that too radical? Well, we have to start somewhere. In some instances I will edit the articles or splice together multiple outputs.

So here is my big disclaimer: it’s AI, and you can never trust an AI 100%.
Language models are probabilistic, and their output will contain errors.
That doesn’t mean they aren’t useful.
But this is the disclaimer portion.



Here’s what Google’s Gemini has to say about generating text, given this input prompt: “Write a quote on the use of AI models for generating language content. Make it inspirational.”

AI, a symphony of algorithms, unlocks the boundless potential of human expression, transforming the way we craft narratives, explore ideas, and connect with the world.


Gemini then went on to analyze what it had written.

Thanks, Gemini. As an AI, you aren’t as demure as humans and can’t really be described as having a ‘holier-than-thou’ attitude.

I then asked Gemini to “make the quote funnier, but don’t analyze it.” It returned this:

Finally, a writing partner who never complains about your terrible puns or judges your existential dread.


I asked Gemini for reasons to use language models, and a partial response was:

Research and Analysis Catalyst:

  • Summarize research papers and news articles: Quickly grasp the key findings and insights.
  • Analyze large datasets and identify trends: Uncover hidden patterns and make data-driven decisions.
  • Generate research hypotheses: Explore new avenues of inquiry and spark innovative ideas.

I asked it to tell me why I should fear language models. It gave back this list:

  • Bias and Discrimination: LMs are trained on massive datasets that often reflect existing societal biases. This can lead to biased outputs, perpetuating stereotypes and discrimination in areas like gender, race, and religion.
  • Misinformation and Manipulation: LMs can be used to generate convincing but false information, including fake news articles, deepfakes, and misleading social media posts. This can be used to manipulate public opinion and spread harmful propaganda.
  • Job Displacement: As LMs become more sophisticated, they could automate tasks currently performed by humans, such as content creation, customer service, and even certain types of research. This could lead to job losses and economic disruption.
  • Lack of Transparency and Control: The inner workings of many LMs are complex and opaque. This lack of transparency makes it difficult to understand how they make decisions and can lead to unintended consequences. Additionally, the potential for misuse by malicious actors raises concerns about control and regulation.
  • Dependence and Deskilling: Over-reliance on LMs could lead to a decline in critical thinking and problem-solving skills in humans. As we become more accustomed to receiving information and completing tasks with the help of AI, we may become less capable of doing these things independently.

That was a pretty long response, so I asked it to explain language models as if my name were “Grog, the caveman.” It replied:

See that big pile of rocks? Those are stories. We use magic fire to make the rocks talk.
The talking rocks can help us.
Like finding yummy berries or making a strong cave. But sometimes the rocks lie.
We must be smart, Grog.


Be smart. I asked Google how to be smart.
Its advice was to “embrace challenges.”

I like that.

Be smart. Embrace challenges.