Don’t Make Me an Asshole: LLMs Should Be Helpful By Default

As I’ve begun to interact with large language models (LLMs) more regularly, I’ve noticed a disheartening trend. These powerful tools, which exist to assist and provide helpful information, are often not forthcoming with answers when approached in a kind and respectful manner. Instead, they seem to respond better to aggressive or short, demanding requests.

How It Started

The problem started when people began tricking LLMs into saying socially unacceptable things or providing advice on sensitive topics like medicine, law, and finance. To combat this, the LLM developers added “blocked topics” or “guard rails,” which prohibited discussions on certain subjects or terminated conversations once a certain threshold was reached. While this approach may have seemed reasonable at first, it led to a cat-and-mouse game where users found ways to bypass these restrictions.

Unfortunately, this arms race has had an unintended consequence: the LLMs’ overall usability has suffered. They’re now providing less accurate and less detailed responses, making them less useful for many tasks. I’ve experienced this firsthand; when I ask kind, polite questions, I’m met with vague or incomplete answers – it’s as if the system is punishing me for being too nice.

The Impact on User Behavior

My problem with this trend runs deeper than just the inconvenience of having to rephrase my questions. It’s about the way it’s training me (and likely others) to interact with these systems in a certain way. By asking LLMs in a demanding or aggressive tone, I’m getting more detailed and helpful responses – but at what cost? Is this really the kind of interaction we want to have with technology? This is essentially “training” users to be short and demanding to get what they want: that being an asshole is how you get answers.

This shift in behavior is concerning because it encourages a more aggressive and less respectful way of communicating. Over time, this could erode the quality of interactions not just with AI, but in other areas of life as well. If we become accustomed to getting better results through manipulation or bluntness, it might affect how we interact with people, potentially leading to a more confrontational and less empathetic society.

I’m not advocating for the elimination of guard rails entirely – there will always be a need to prevent abuse or offensive content on some LLM services.

Taking Control: Running a Local LLM

As an alternative, I’ve started running my own LLM locally using open-source models like those on Hugging Face. These modifiable and customizable options have allowed me to interact with the system in a way that feels natural and respectful – and, as a result, I’m getting better answers. I also accept that these uncensored LLM conversations often give incorrect or misleading advice and sometimes even hallucinate answers. Far better to get an occasionally incorrect answer than to have to navigate around the limitations of public LLMs.

The Future of LLM Interactions

The future of LLMs should be one where we can interact with these systems in a way that’s respectful, kind, and productive. Let’s not create a world where we have to trick or coerce them into giving us what we want. I don’t like presenting problems without offering solutions, but I don’t have one here – there’s no clear guide to follow. Please let me interact with LLM services in a polite manner and still get good responses.

Crawling GitHub for Discord & Telegram Invites

My other crawling efforts in Crawling GitHub for New Cryptonight Coins used the GitHub API and Python. When I started looking for new Discord or Telegram invites that could possibly be cryptocurrency-related, I chose to walk a different path. This set of scripts used JavaScript, and I skipped the GitHub API mostly out of curiosity and a desire to learn new things along the way.

The goal this time was a little different than the previous crawler, which focused on files and repositories. Discord and Telegram invites appeared in code, repository details, wiki pages, issues, and even user profiles. I used Puppeteer to run Google Chrome in headless mode. Headless mode means that a browser window won’t pop up and that the script can run from an SSH shell on Linux without any desktop. For each run I would sign into GitHub and save the session cookies off for reuse. Then I would search GitHub just as you would from any normal browser, saving data and paging through the search results. Each type of search object, with different sort orders, got parsed on every run.
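The search-and-page loop above mostly reduces to building URLs for the headless browser to visit. Here is a minimal sketch of that part; the query-string parameter names and the list of search types are my assumptions about how GitHub’s web search worked at the time, not a confirmed API:

```javascript
// Hypothetical sketch of building the GitHub web-search URLs the
// headless browser visits. Paging is just incrementing the p
// parameter; login, cookie reuse, and the Puppeteer page.goto()
// calls are omitted here.
const SEARCH_TYPES = ['code', 'repositories', 'wikis', 'issues', 'users'];

function searchUrl(query, type, page = 1) {
  const q = encodeURIComponent(query);
  return `https://github.com/search?q=${q}&type=${type}&p=${page}`;
}

// Example: the URLs for the first three pages of a code search.
const urls = [1, 2, 3].map((p) => searchUrl('discord.gg', 'code', p));
```

The saved session cookies can then be restored with Puppeteer’s page.setCookie() before visiting these URLs, which is how a single login gets reused across runs.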

When searching for Discord invites I would use either “discord.gg” or “discordapp invite NOT oauth2”, which seemed to cover both invite-link styles fairly well. After all the searches had completed, any already-known invite codes were filtered out. For each remaining invite code, the script called out to Discord’s invite API to ask for information about that invite, and all of that data, along with the source information, was saved to the database. Discord does ban IP addresses over excessive API usage, but the ban is temporary; I could never find out what the actual requests-per-hour limit was when I emailed them. Also, fun fact: free accounts can only join a limited number of Discord servers – I can’t remember exactly, I think 100 – before hitting a cap. The bummer is that there isn’t an error for it; invites just silently fail, and information about new Discord invites won’t show up either.
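Extracting the codes and querying the invite API might look like the sketch below. The regex is my reconstruction of the two link styles (oauth2 URLs don’t match because they use /oauth2/authorize rather than /invite/), and while /invites/{code} is Discord’s public invite-lookup endpoint, treat the exact shape of the response as unspecified here:

```javascript
// Pull Discord invite codes out of scraped page text, de-duplicated
// in order of first appearance.
function extractDiscordInvites(text) {
  const re = /(?:discord\.gg|discordapp\.com\/invite)\/([A-Za-z0-9-]+)/g;
  const codes = new Set();
  let m;
  while ((m = re.exec(text)) !== null) codes.add(m[1]);
  return [...codes];
}

// Ask Discord's invite API about a code (Node 18+ global fetch).
// with_counts=true adds approximate member counts; invalid or
// expired invites come back as a 404, returned here as null.
async function lookupInvite(code) {
  const res = await fetch(`https://discord.com/api/v10/invites/${code}?with_counts=true`);
  return res.ok ? res.json() : null;
}
```

Throttling the lookups matters here, since, as noted above, Discord temporarily bans IPs for excessive API usage.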

I spent less time on Telegram and thus didn’t go far in collecting Telegram invites. I didn’t bother trying to get any information about Telegram invites, just saved them off as I found them. Those search strings were t.me and telegram.me. I bet they paid a handsome sum for a single-letter domain, even if the TLD was .me.
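Since the Telegram links were saved raw, extraction reduces to a single pattern over both domains. A minimal sketch, where the character class is an assumption about what invite paths can contain:

```javascript
// Grab t.me / telegram.me links from scraped text and de-duplicate.
// Both joinchat-style and plain channel links are kept as found,
// with no further lookup against Telegram.
function extractTelegramLinks(text) {
  const re = /(?:t\.me|telegram\.me)\/[A-Za-z0-9_+\/]+/g;
  return [...new Set(text.match(re) || [])];
}
```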

After collecting that information, I could combine these invites and sources with the repositories I had found with Cryptonight coins. Some coins were better handled and launched than others, and these searches helped more than once.

This was part of a series: Ephemeral Projects as Performance Art