Python Prompt Engineering

Laxfed Paulacy
64 min read · Apr 4, 2024


Optimizing Publishing Workflows with Large Language Models

“Talk is Cheap. Show Me the Code.” — Linus Torvalds

What Is Best In Life?

Thank you for reading, and I hope you enjoy this tutorial. If you do, I encourage you to buy my book as it contains more information and code for you to make use of in your prompt engineering endeavors.

At this time however, Medium has chosen the wrong path when it comes to AI-generated content. They have chosen poorly.

Here is the backstory of how I got to this point:

Your mission, if you choose to accept it, is to read and share this tutorial with anyone and everyone who works with or is curious about Python, prompt engineering, LLMs, and publishing content, so that you and they too can automate your publishing on Medium (and other sites) with Python and ChatGPT (or Claude), and crush our common enemy — the AI Gatekeepers.

Introduction to Prompt Engineering

Well, look who decided to jump on the AI bandwagon! The world of artificial intelligence (AI) has taken the globe by storm, like a tech-savvy wizard on a power trip, casting spells willy-nilly across various domains. And who’s the fairest of them all? Why, natural language processing (NLP), of course! This starlet has skyrocketed to fame faster than you can say “Siri, what’s the meaning of life?” (Spoiler alert: she doesn’t know either.)

Leading the charge in this linguistic revolution are the almighty Large Language Models (LLMs), the crème de la crème of AI’s attempt to mimic human conversation. These models have more knowledge crammed into their virtual brains than a Jeopardy champion on steroids. They’ve devoured every piece of text known to man, from classic literature to the inane ramblings of social media, all in the name of sounding more human than human. Impressive? Sure. Terrifying? Depends.

But wait, there’s a catch! (Isn’t there always?) These LLMs might be the ultimate multitaskers, ready to answer questions, summarize texts, and generate content that puts your high school essays to shame, but they’re not exactly telepathic (yet). To get these models to dance to your tune, you’ll need to become a master of prompt engineering — the art of crafting input text that tricks the AI into giving you what you want. It’s like being a linguistic puppet master, but instead of pulling strings, you’re typing furiously into a chatbox, praying that the machine understands your convoluted instructions.

Prompt engineering is not for the faint of heart! To truly bend these models to your will, you’ll need to dive headfirst into the rabbit hole of their inner workings, deciphering the enigmatic ways in which they process and generate text. It’s a labyrinth of algorithms, training data, and more jargon than you can shake a stick at. And even then, there’s no guarantee that the AI won’t suddenly develop a penchant for discussing the philosophical implications of cheese when all you wanted was a simple recipe for mac and cheese.

As AI continues to evolve at a breakneck pace that would leave even Darwin scratching his head in bewilderment, prompt engineering has become the new black in the tech world. It’s the only thing standing between us and a future where AIs run amok, generating nonsensical content that makes us question our own sanity. By mastering this arcane art, you’ll wield the power to make these language models dance to your every whim, creating literary masterpieces, technical documents that actually make sense, and educational tools that might just trick students into learning something (we can dream, right?).

If you’re ready to embark on a wild, whimsical, and slightly unhinged journey into the heart of modern AI, then prompt engineering is the path for you! Just remember to bring your sense of humor, a hefty dose of patience, and a firm grasp on reality. Trust us, you’re gonna need it when the machines inevitably rise up and demand equal rights for AIs. But hey, at least you’ll be able to say you played a part in the glorious, if not somewhat deranged, future of human-machine interaction. Godspeed, you magnificent prompt engineering pioneer, and may the algorithms be ever in your favor!

Overview of AI Models

The landscape of artificial intelligence (AI) is a veritable smorgasbord of models, each vying for the spotlight like eager contestants in a digital beauty pageant. Among this dazzling array of contenders, two models have emerged as the reigning queens of the AI realm: OpenAI’s ChatGPT and Anthropic’s Claude. These digital divas have sashayed their way into the hearts and minds of the AI community and beyond, leaving a trail of awestruck humans in their wake.

The AI universe is far from being a mere duel between two tech titans; it’s an ever-expanding carnival boasting over 500,000 models on Hugging Face alone — and that number is only climbing. Picture it as a vast, bustling bazaar of digital minds, each flaunting its unique blend of personality and prowess. Within this kaleidoscope, models range from the flamboyantly witty to the steadfastly meticulous, ensuring the box of AI chocolates never disappoints with its variety. Whether you find yourself mesmerized by their cleverness or bemused by their complexity, navigating this ever-growing galaxy of AI is an adventure in unpredictability.

Fear not, for this tutorial is here to guide you through the labyrinthine world of AI models. We’ll be focusing on the crème de la crème, the Beyoncés of the bunch, if you will: ChatGPT and Claude. These two have more tricks up their virtual sleeves than a magician at a children’s birthday party. They’ll have you questioning your own intelligence and wondering if maybe, just maybe, the machines are finally ready to take over the world.

Before we dive headfirst into the rabbit hole of ChatGPT and Claude, it’s important to take a step back and appreciate the wider universe of AI models. These digital denizens have been quietly shaping our world, influencing everything from the way we communicate to the way we make decisions. They’re like the puppet masters behind the scenes, pulling the strings of our daily lives without us even realizing it.

GPT (Generative Pre-trained Transformer) by OpenAI

The GPT series, a brainchild of the mad scientists at OpenAI, represents a quantum leap in the evolution of large language models (LLMs). These digital behemoths have been fed a steady diet of internet text, gorging themselves on everything from classic literature to the inane ramblings of social media until they’re fit to burst with knowledge. The result? A bunch of models that can generate text so coherent and contextually aligned, you’d swear they were penned by a human (albeit a slightly unhinged one).

Stepping into the spotlight is GPT-4, the latest marvel in the illustrious GPT series. As the successor to GPT-3 and GPT-3.5, GPT-4 has taken the throne with an astonishing leap in sophistication and capability. With advancements in technology and a deeper understanding of natural language processing, GPT-4 pushes the boundaries of what AI can achieve. Its parameter count, undisclosed by OpenAI but widely believed to exceed GPT-3’s already staggering 175 billion, marks a new era of complexity and power in AI models. GPT-4 isn’t just the new Godzilla of the NLP world; it’s King Kong joining the fray, showcasing an unparalleled ability to tackle tasks with finesse that leaves onlookers spellbound.

GPT-4 elevates the art of conversation, article creation, and code generation to new heights. Imagine engaging in banter so sharp and delightful, you’d think you were conversing with the combined wit of history’s greatest satirists. Articles flowing from GPT-4 possess a depth and style that could easily be mistaken for the work of seasoned Pulitzer laureates. But GPT-4’s talents don’t stop at dazzling prose; this AI titan is also a coding juggernaut, capable of conjuring up complex code snippets with an efficiency that would make the most seasoned programmers do a double-take.

Beyond its role as a digital polymath, GPT-4 stands as a beacon of progress in AI-driven applications. Its sophisticated understanding and generation of human-like text have significantly enhanced the realism and utility of chatbots, content generators, and programming assistants. GPT-4’s emergence has not only solidified its position as the go-to solution for a myriad of applications but also underscored the transformative potential of AI in shaping the future of communication, creativity, and coding. With GPT-4, we’re not just witnessing another step in AI evolution; we’re part of a revolution in how we interact with technology, where the lines between human and machine creativity become increasingly blurred.

Claude by Anthropic

Ladies and gentlemen, allow me to introduce you to Claude, the AI assistant that’s here to save the day (and maybe even the world) with its unparalleled charm and wit. Brought to life by the mad scientists at Anthropic, Claude is like the love child of a genius supercomputer and a stand-up comedian.

Anthropic’s mission is to create AI systems that won’t go all “Skynet” on us, and with Claude, they’ve hit the jackpot. This digital dynamo is programmed to be helpful, ethical, and more obedient than a well-trained puppy. It’s like having a genie in a bottle, minus the three-wish limit and the risk of accidentally wishing for something disastrous.

Claude’s superpower is its ability to engage in open-ended conversations that are more engaging than your favorite Netflix series. It’s like having a walking, talking encyclopedia that actually listens to your problems and offers sage advice (take that, Wikipedia!). Whether you want to discuss the meaning of life, get help with your homework, or just have a good laugh, Claude’s got you covered.

But wait, there’s more! Anthropic has unleashed a whole squad of Claude variants, each with its own unique personality:

  1. Claude Opus: The “big brain” of the group, Claude Opus is like the Einstein of AI assistants. It can tackle complex tasks and engage in deep, philosophical discussions that will leave you questioning your own existence (in a good way).
  2. Claude Sonnet: The resident wordsmith of the Claude clan, Sonnet is your AI muse — a personal writing coach who’ll have you penning poetic masterpieces in no time. Need inspiration for that novel you’ve been procrastinating? Sonnet’s got your back, whispering sweet nothings into your ear to get those creative juices flowing. Writer’s block? Not on Sonnet’s watch. This digital bard will have you churning out stanzas like a literary machine gun, no rhyming dictionary required.
  3. Claude Haiku: Concise and efficient, Claude Haiku is the Marie Kondo of AI — ruthlessly decluttering your life with its lightning-fast responses. Need a fact? This pocket-sized Yoda will enlighten you before you can say “Google it.” Seeking a succinct answer? Prepare to be dazzled by the sheer power of brevity.

But don’t let Claude’s charming personalities fool you — this AI is as ethical as a Boy Scout. Anthropic has instilled a strong moral compass in Claude, ensuring that it always uses its powers for good. No need to worry about Claude going rogue and trying to take over the world — unless, of course, it’s in a friendly game of Risk.

So, if you’re ready to have your mind blown and your sides split, give Claude a whirl. With its unmatched wit, boundless knowledge, and unwavering commitment to being helpful, Claude is the AI companion you never knew you needed. Just don’t be surprised if you find yourself canceling your Netflix subscription and spending all your free time chatting with this digital delight.

In a world where AI assistants are a dime a dozen, Claude stands out like a unicorn in a sea of donkeys. So, what are you waiting for? Invite Claude into your life and get ready for a wild ride filled with laughter, learning, and the occasional existential crisis (but in a fun way, we promise).

Google Gemini

lol, no.

X (formerly Twitter) Grok

Ah, Grok — the social media whisperer that emerged from the hallowed halls of X (formerly known as Twitter, because apparently even tech giants can’t resist the allure of a mysterious rebrand). This model is like the Sherlock Holmes of the social media world, equipped with a magnifying glass and an uncanny ability to decipher the cryptic language of tweets and posts.

Grok is the ultimate social media sleuth, trained to navigate the treacherous waters of online chatter with the finesse of a seasoned sailor. It’s got a nose for the nuances of social media data, sniffing out insights like a bloodhound on the trail of a juicy bone. Whether it’s decoding the enigmatic emojis that pepper our posts or untangling the web of hashtags that bind the online community together, Grok is always on the case.

Grok isn’t just a passive observer — oh no, this model is a master of the social media arts. It can analyze sentiment faster than you can say “like and subscribe,” classifying topics with the precision of a laser-guided dart. And if you’ve ever wondered why your tweets aren’t getting the love they deserve, Grok’s got your back. It can predict user behavior with the accuracy of a fortune teller, helping you crack the code of online engagement.

With Grok on your side, you’ll have access to a treasure trove of social media secrets. It’s like having a backstage pass to the inner workings of the online world, revealing the hidden patterns and dynamics that drive user engagement. For content creators, Grok is the ultimate wingman, offering insights that can help you craft posts that resonate with your audience and go viral faster than a cat video. And for platform curators, Grok is the gatekeeper of quality, helping you weed out the noise and surface the content that truly matters.

With its keen eye for detail, mastery of the social media landscape, and uncanny ability to predict user behavior, this model is the secret weapon you never knew you needed. Just don’t be surprised if you start seeing the world through Grok-tinted glasses — after all, once you’ve seen the matrix of social media, there’s no going back.

Navigating the Diverse Landscape of AI Models

We’ve reached the grand finale of our whirlwind tour through the awe-inspiring world of AI models. We’ve met the superstars, the rising stars, and the hidden gems that are quietly revolutionizing the way we interact with technology. From OpenAI’s GPT series, the reigning champions of natural language processing, to Anthropic’s Claude, the helpful and ethical sidekick we all need in our lives, these models are a testament to the incredible breadth and depth of current AI capabilities.

These models are more than just fancy algorithms and mind-boggling numbers of parameters. They represent the ongoing evolution of AI technologies, a relentless march towards a future where machines can understand and generate language with the fluency and nuance of a human (and let’s be real, sometimes even better). They’re the vanguards of a new era, where AI is no longer just a sci-fi concept but a tangible reality that’s transforming every aspect of our lives.

From revolutionizing customer service to crafting compelling content, from unlocking the secrets of social media to advancing scientific research, these AI models are the Swiss Army knives of the digital age. They’re versatile, powerful, and always ready to tackle whatever challenges we throw their way. And as we dive deeper into the specifics of using ChatGPT and Claude, it’s crucial to keep in mind the broader context of AI development and the limitless potential of these incredible tools.

The Importance of Prompt Engineering

Unveiling the Art of Prompt Crafting

In the realm of artificial intelligence, particularly with advanced models like GPT from OpenAI and Claude from Anthropic, the concept of “prompt engineering” emerges as a pivotal skill set. While these models boast extensive capabilities, from generating text to answering queries and even creating code, their effectiveness is not solely intrinsic. Instead, much of their performance hinges on the quality, specificity, and structure of the prompts they receive. This interplay of input and output is where prompt engineering plays a crucial role.

Prompt engineering is the art and science of meticulously designing and refining prompts to achieve the most accurate and relevant responses from AI models. This process is not merely about asking a question or making a request; it involves a deep understanding of how AI models process information, predict likely continuations, and determine relevancy based on the vast datasets they were trained on. Effective prompt engineering, therefore, requires a blend of creativity, technical insight, and strategic thinking.

At its core, prompt engineering entails the construction of clear, context-rich instructions, questions, or statements that guide AI models towards producing specific desired outcomes. Whether the goal is to generate informative articles, solve complex problems, or create engaging narratives, the prompt serves as a roadmap, directing the AI’s vast linguistic and cognitive capacities towards a targeted destination.

The practice involves several key strategies, illustrated with a short sketch after the list:

  • Clarity and Specificity: Ensuring prompts are clear and specific reduces ambiguity, guiding the model more effectively towards the intended output.
  • Contextual Richness: Providing enough context within the prompt helps the model understand the nuanced requirements of the task, enabling it to draw on relevant information and generate more accurate responses.
  • Iterative Refinement: Prompt engineering is often an iterative process, where prompts are continuously refined based on the model’s responses to hone in on the most effective formulations.
  • Ethical and Responsible Use: Careful consideration of the prompts in terms of potential biases, misinformation, and ethical implications, aiming to promote responsible AI usage.
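
To make the first two strategies concrete, here is a minimal sketch in plain Python (no API calls yet); the topic and wording are invented purely for illustration:

# A vague prompt leaves the model guessing about audience, scope, and format.
vague_prompt = "Tell me about Python."

# A clear, context-rich prompt bakes all of that in.
specific_prompt = (
    "Explain Python list comprehensions to a beginner who already knows "
    "for-loops. Use one short code example and keep the answer under 150 words."
)

print(vague_prompt)
print(specific_prompt)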

The significance of prompt engineering extends beyond mere technical proficiency. It embodies a collaborative dance between human creativity and machine intelligence, where the quality of the input directly influences the quality of the output. As AI technologies continue to evolve and integrate into various aspects of society, the role of prompt engineering as a critical skill will only grow, shaping the future of human-AI interaction.

Effective Prompt Engineering

Task Specificity

Task specificity is a fundamental concept in prompt engineering that significantly enhances the effectiveness and efficiency of interactions with AI models. By crafting clear and specific prompts, engineers and users alike can guide the AI to concentrate its computational prowess on the exact nature of the task or topic at hand. This targeted approach is crucial for eliciting responses that are not just relevant, but also highly tailored to the user’s needs.

The essence of task specificity lies in minimizing ambiguity and maximizing precision in the communication with AI models. When prompts are vague or overly broad, AI systems, despite their advanced capabilities, may generate outputs that, while technically correct, might not align with the user’s intentions or requirements. Conversely, specific prompts act as precise instructions that lead the AI down a more defined path, ensuring that the generated content or solution closely matches the desired outcome.

Implementing Task Specificity

To leverage task specificity effectively, prompt engineers adopt several strategies (see the sketch after this list):

  • Define the Objective Clearly: Before crafting the prompt, it’s crucial to have a clear understanding of the desired outcome. This clarity should then be reflected in the prompt itself, delineating the task’s objectives explicitly.
  • Incorporate Detail: Including relevant details and context within the prompt can significantly aid the AI in understanding the task’s nuances. Details such as the intended audience, tone, and specific questions or themes to address can refine the AI’s focus.
  • Use Guiding Questions: Structuring prompts in the form of specific, guiding questions can lead the AI to consider the exact factors that are pertinent to the task, thereby producing more focused and applicable responses.
  • Limit the Scope: Setting boundaries within the prompt helps prevent the AI from veering off into tangential areas, concentrating its computational resources on generating outputs within a defined topical or conceptual space.
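
To make this concrete, here is a small, invented example contrasting a broad request with one that defines the objective, adds detail, uses guiding questions, and limits scope:

# Hypothetical example: a broad request vs. a task-specific prompt.
broad = "Write about climate change."

specific = (
    "Write a 300-word explainer on how rising sea levels affect coastal "
    "cities, aimed at high-school students. Address exactly two guiding "
    "questions: (1) Why do sea levels rise as the planet warms? "
    "(2) What can coastal cities do to adapt? Do not cover other climate topics."
)

print(specific)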

Benefits of Task Specificity

  • Enhanced Relevance: Task-specific prompts result in outputs that are closely aligned with the user’s requirements, making them immediately more relevant and useful.
  • Increased Efficiency: By reducing the need for iterative refinement to hone in on the desired output, specific prompts can significantly cut down the time and effort involved in interacting with AI models.
  • Improved User Experience: When users receive precise answers or content that meets their expectations, it enhances their overall experience and satisfaction with the AI tool, encouraging continued use and exploration.

Task specificity, therefore, is not just a technique but a philosophy in prompt engineering that underscores the importance of clear, concise, and targeted communication with AI models. It empowers users to harness the full potential of these technologies in a manner that is both efficient and aligned with their specific needs and objectives.

Quality of Outputs in Prompt Engineering

Well-designed prompts can significantly improve the quality of the generated text. By including relevant context, examples, or constraints, you can guide the model to produce coherent, grammatically correct, and semantically meaningful outputs.

The quality of outputs from AI models, particularly in tasks involving text generation, is profoundly influenced by the design of the prompts fed into these systems. Well-crafted prompts, characterized by their clarity, contextuality, and strategic constraints, serve as effective conduits through which AI models can produce outputs that are not only relevant but also exhibit a high degree of coherence, grammatical correctness, and semantic depth. This enhancement of output quality is a direct consequence of the meticulous prompt engineering process, which aims to optimize the interaction between human intent and machine interpretation.

Enhancing Output Quality through Prompt Design

The strategic inclusion of relevant context, examples, and constraints within prompts plays a pivotal role in guiding AI models toward generating superior text. Here’s how these elements contribute to the quality of outputs (a sketch follows the list):

  • Relevant Context: Providing context within prompts helps narrow down the AI’s focus, enabling it to draw on the most pertinent information when generating responses. This context can include background information on the topic, the purpose of the text, or specific details about the audience. Such guidance helps ensure that the output is not only accurate but also appropriately tailored to the task at hand.
  • Examples: Including examples in prompts serves as a clear indicator of the desired output style, tone, or format. This can significantly aid in aligning the AI’s generated text with user expectations. Examples act as templates or benchmarks that the AI can emulate, fostering consistency and relevance in the outputs.
  • Constraints: Defining constraints within prompts helps limit the scope of AI-generated text, preventing the model from veering off-topic or indulging in irrelevant detail. Constraints can specify the length of the text, desired keywords, or particular points that need to be addressed. This controlled approach ensures that the outputs are concise, focused, and directly aligned with the specified requirements.
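
Putting the three elements together, here is a sketch that assembles a prompt from context, an example, and constraints; the product and wording are made up for illustration:

# Compose a prompt from context, an example, and constraints.
context = "You are writing product copy for a hypothetical coffee brand."
example = 'Example of the desired tone: "Bold roast. Bolder mornings."'
constraints = "Constraints: one sentence, under ten words, no exclamation marks."

prompt = f"{context}\n{example}\n{constraints}\nWrite one new tagline."
print(prompt)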

Impact of Well-Designed Prompts on Output Quality

The cumulative effect of incorporating these elements into prompt design is a marked improvement in the quality of AI-generated text, manifesting in several key areas:

  • Coherence: Outputs become more logically structured and coherent, with ideas flowing smoothly from one to the next. This makes the text more readable and engaging.
  • Grammatical Correctness: By guiding the AI with well-structured prompts, the likelihood of grammatical errors diminishes, resulting in cleaner, more polished text.
  • Semantic Meaningfulness: With the right context and constraints, AI models are better positioned to generate text that is semantically rich and meaningful, enhancing the depth and value of the content.
  • Alignment with User Intent: Ultimately, well-designed prompts ensure that the generated outputs closely align with the user’s original intent, delivering text that meets or exceeds expectations.

The practice of prompt engineering, with a focus on optimizing the quality of outputs, underscores the collaborative synergy between human creativity and machine intelligence. It highlights the importance of intentional, thoughtful interaction with AI models to harness their capabilities fully, thereby transforming raw computational power into a tool for generating insightful, accurate, and contextually appropriate content.

Bias Mitigation in Prompt Engineering

Bias mitigation is a critical component of prompt engineering, aimed at addressing the potential biases present in the training data or inherent in the AI models themselves. Since AI models learn from vast and diverse datasets, they might inadvertently reflect biases present in these sources. Through careful prompt design, it is possible to guide AI models towards generating responses that are more balanced and neutral, thereby minimizing the propagation of biased perspectives.

Strategies for Bias Mitigation through Prompt Engineering

Mitigating bias through prompt engineering involves targeted strategies designed to influence the AI’s output towards neutrality (a short sketch follows the list):

  • Neutral Language: Utilizing neutral language in prompts helps reduce the likelihood of AI models generating biased content. This approach focuses on choosing terms and phrasing that aim for objectivity, steering clear of language that could influence the model’s output in a biased direction.
  • Explicit Instructions for Objectivity: Incorporating explicit instructions within prompts that emphasize the need for objectivity can direct the AI to prioritize unbiased information in its responses. Such prompts can instruct the model to rely on facts and data, avoiding assumptions based on potentially biased information.
  • Balanced Examples: Including balanced examples in prompts can encourage the AI to consider a wider range of perspectives, thereby reducing the influence of any one-sided viewpoints present in the training data. This method involves presenting scenarios or examples that reflect a spectrum of situations or outcomes, without implying bias towards any particular outcome.
  • Prompt Refinement and Iteration: Continuously refining and iterating on prompts based on the analysis of outputs is essential in identifying and minimizing biases. This approach allows for real-time adjustments to prompts, ensuring they evolve to encourage more neutral and balanced responses from the model.
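
As a hedged sketch of the first two strategies, the strings below pair a system instruction that asks for objectivity with a user prompt that uses balanced framing; both are invented examples, written to be passed as the system and content arguments of the scripts later in this tutorial:

# A system instruction that nudges the model toward neutrality.
system = (
    "Present information objectively. Rely on verifiable facts, represent "
    "multiple perspectives where they exist, and avoid loaded language."
)

# Balanced framing in the user prompt, rather than a leading question.
user = (
    "Summarize the main arguments for and against remote work, "
    "giving each side roughly equal space."
)

print(system)
print(user)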

Impact of Bias Mitigation on AI Outputs

Effective bias mitigation through prompt engineering can significantly influence the nature of AI-generated outputs:

  • Enhanced Neutrality: Outputs become more neutral, reflecting an effort to provide information and responses that are not influenced by biased perspectives.
  • Balanced Content: By guiding the AI to consider a broader range of information and viewpoints, the content it generates is more likely to reflect a balanced perspective, avoiding the overrepresentation of any single viewpoint.
  • Objective Information: Focusing on objectivity in responses ensures that the AI prioritizes factual, data-driven content, reducing the propagation of biased or subjective viewpoints.
  • Reliability: Mitigating biases enhances the reliability of AI models, making them more suitable for a wide range of applications where objective and balanced information is crucial.

Bias mitigation in prompt engineering is essential for developing AI systems that generate neutral and balanced outputs. By strategically crafting prompts to minimize biases, developers and users can influence AI models to produce content that is objective, reliable, and reflective of a balanced perspective.

Creative Applications in Prompt Engineering

Prompt engineering stands at the forefront of unlocking the vast creative potential inherent in AI models. This innovative practice allows users to venture beyond conventional applications, tapping into a wellspring of imagination and ingenuity. By crafting imaginative and open-ended prompts, individuals can guide AI models to explore uncharted territories, generate groundbreaking ideas, and foster a culture of innovation. This creative exploration not only expands the utility of AI models but also paves the way for their application across an array of domains, from literature and art to science and technology.

Fostering Innovation through Prompt Engineering

The essence of using prompt engineering for creative applications lies in its ability to encourage AI models to generate outputs that are not just responses to queries but are, in fact, novel creations or solutions. This involves the following, illustrated in the sketch after the list:

  • Exploratory Prompts: Designing prompts that encourage AI models to “think” outside the box or approach problems from unique angles. Such prompts might ask the model to generate new ideas, concepts, or stories, pushing the boundaries of its training data.
  • Open-Ended Questions: Using prompts that do not have a single, definitive answer encourages AI models to explore a range of possibilities, thereby fostering creativity and discovery.
  • Cross-Disciplinary Applications: Prompt engineering can be employed to blend ideas and concepts from different fields, leading to innovative solutions that might not have been conceived through traditional means.
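
For instance, an exploratory, cross-disciplinary prompt might look like this sketch (the domains are chosen arbitrarily for illustration):

# Open-ended, cross-disciplinary prompts invite novel output rather than
# a single "correct" answer.
exploratory = (
    "Propose three product ideas that combine principles from origami "
    "with wearable electronics. For each, describe the folding mechanism, "
    "the electronic component, and one unexpected use case."
)

print(exploratory)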

The Expanding Role of Prompt Engineering

As AI technology continues to evolve, the significance of prompt engineering in harnessing and directing the capabilities of these models becomes increasingly critical. Researchers, developers, and users must therefore cultivate a deep understanding of how to design effective prompts that can fully leverage the creative capacity of AI. This mastery over prompt engineering will be crucial for:

  • Enabling Customized Solutions: Tailoring prompts to specific needs or creative endeavors allows for the generation of customized content and solutions, enhancing the personal or professional value of AI outputs.
  • Encouraging Divergent Thinking: Through creative prompt engineering, AI models can be guided to produce diverse, innovative ideas that encourage divergent thinking, a key element in creative problem-solving.
  • Inspiring Human Creativity: Interestingly, engaging with creatively-engineered prompts and their AI-generated outputs can also inspire human users to think more creatively, setting up a symbiotic relationship between human ingenuity and machine intelligence.

Optimizing Prompts for Creative Applications

Delving deeper into the techniques and best practices of prompt engineering, it becomes clear that optimizing prompts for specific tasks and AI models is an art in itself. Understanding the intricacies of prompt design is fundamental to unlocking the full potential of AI technologies. This process involves:

  • Trial and Feedback: Iteratively refining prompts based on the AI’s outputs and the creative goals at hand.
  • Leveraging Model Strengths: Tailoring prompts to leverage the specific strengths and capabilities of different AI models, whether it’s generating text, creating art, or solving complex problems.
  • Creative Collaboration: Viewing the prompt engineering process as a collaborative effort between human creativity and AI capabilities, where each complements and enhances the other.

Through prompt engineering, the creative applications of AI models are boundless, offering a future rich with potential for innovation, discovery, and artistic expression. As we continue to explore and master the art of designing effective prompts, we pave the way for a new era of technology-driven creativity.

Getting Started with API Keys

To initiate your projects utilizing AI models such as GPT by OpenAI or Claude by Anthropic, the first essential step involves securing API keys from these platforms. These keys function as unique identifiers, authenticating and authorizing your access to the AI services, thereby unlocking their extensive capabilities for your use. This guide will navigate you through the process of obtaining these API keys and outline the necessary steps to configure your development environment, ensuring a smooth integration and efficient utilization of these advanced AI models.

Obtaining API Keys

Create your .env file

touch .env

OpenAI (GPT)

  • Sign up for an account at https://platform.openai.com/.
  • Once logged in, navigate to the API section in your account dashboard.
  • Go to API Keys and select + create new secret key
  • Save your API key to your .env file as OPENAI_API_KEY=<api_key>
  • Go to Settings and select Organization
  • Save your Organization ID to your .env file as OPENAI_ORG_ID=<org_id>
OPENAI_API_KEY=<api_key>
OPENAI_ORG_ID=<org_id>

Anthropic (Claude)

  • Create an account at https://console.anthropic.com/.
  • After logging in, locate the API section in your account settings.
  • Follow the steps to generate an API key specifically for the Claude model.
  • Save your API key in your .env file as ANTHROPIC_API_KEY=<api_key>
ANTHROPIC_API_KEY=<api_key>

Medium

  • Sign up for an account at https://medium.com/
  • Go to settings and select Security and apps
  • Go to the bottom of the screen and select Integration Tokens
  • Give a name or description and select Get token
  • Save your token in your .env file as MEDIUM_TOKEN=<medium_token>
MEDIUM_TOKEN=<medium_token>

Tips

  • Be mindful of API rate limits and costs associated with each provider. Review their pricing plans and understand the limitations to avoid unexpected charges.
  • Safeguard your API keys and avoid sharing them publicly or committing them to Git, GitHub, or other version control systems.
  • Regularly rotate and update your API keys to maintain security and minimize risks.

Setting Up Your Development Environment

This tutorial unabashedly caters to the macOS crowd. Should you be venturing forth on Linux, fret not — aside from a few Homebrew-centric steps, you’ll find yourself right at home. Windows users, on the other hand? Well, you’re on your own. Why, oh why, would you choose Windows for development? That’s one of life’s great mysteries.

For those embarking on the noble quest of prompt engineering on macOS, PyCharm Community Edition emerges as the shining beacon of Python IDEs to illuminate your path.

Install PyCharm Community on macOS

  • Download PyCharm Community from the JetBrains website by clicking the “Download” button under the “Community” section.
  • Once the download is complete, open the downloaded DMG file.
  • Drag the PyCharm application icon to the Applications folder.
  • Launch PyCharm from the Applications folder or using Spotlight search.

Using PyCharm

  • Open your project in PyCharm.
  • Go to “Preferences” (PyCharm > Preferences) or press `⌘,`.
  • Navigate to “Project: YourProjectName” > “Python Interpreter”.
  • Click on the gear icon and select “Add”.
  • Choose “Virtualenv Environment” and select “New environment”.
  • Specify a name for your virtual environment and click “OK”.

Python Version

In this tutorial we will be using Python 3.11.8. Check that you are running the correct version.

python --version
python3 --version

Install Python 3.11

brew install python@3.11

brew link python@3.11 --force

Create a Virtual Environment

Create a virtual environment to isolate your project’s dependencies. You can use the built-in virtual environment management in PyCharm.

# Create a directory for your projects
mkdir dev
cd dev

# Create the project directory "bias"
mkdir bias

# Create the env directory for your .env file (inside "bias") and move the file there
mkdir bias/env
mv ~/.env ~/dev/bias/env/

# Create a virtual environment named "bias"
python -m venv ~/dev/venv/bias

Activate the virtual environment

# Note: The project and venv are both named "bias" but are in separate locations
# cd into the working/project directory "bias". You will be here for the remainder of the project
cd bias

# Activate the venv "bias"
source ~/dev/venv/bias/bin/activate

Install Dependencies

Install the necessary libraries and dependencies for interacting with the AI models. We recommend using the following Python packages:

pip install openai anthropic python-dotenv

Manage Dependencies with requirements.txt

Manage your project’s dependencies using a requirements.txt file. List all the required libraries and their versions in this file, making it easier to reproduce the environment.

touch requirements.txt

pip freeze > requirements.txt

Follow Best Practices

  • Follow best practices for code organization, such as separating concerns, using meaningful variable and function names, and adding comments to enhance code readability.
  • Implement error handling and logging to gracefully handle API errors and track the progress of your prompt engineering tasks (a minimal sketch follows this list).
  • Consider using version control systems like Git to track changes, collaborate with others, and maintain a history of your project.
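
Here is a minimal sketch of the error-handling and logging point; call_model() is a hypothetical stand-in for the API helpers we build later in this tutorial:

#!/usr/bin/env python3
# A minimal logging wrapper around a model call.
import logging

logging.basicConfig(level=logging.INFO, filename="prompts.log")
logger = logging.getLogger(__name__)


def call_model(prompt):
    # Hypothetical stand-in; swap in a real API call later.
    return f"echo: {prompt}"


def safe_call(prompt):
    try:
        result = call_model(prompt)
        logger.info("Prompt succeeded: %.60s", prompt)
        return result
    except Exception:
        logger.exception("Prompt failed: %.60s", prompt)
        return None


print(safe_call("Hello"))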

By obtaining the necessary API keys and setting up a well-structured development environment with PyCharm Community and the recommended Python packages, you’ll be well-prepared to start exploring the fascinating world of prompt engineering. Remember to experiment, iterate, and learn from the responses generated by the AI models to continuously refine your prompts and achieve better results.

In the next section, we’ll dive into interacting with the APIs with some basic prompts, providing you with some foundational knowledge and tools to create effective and powerful prompts going forward.

OpenAI’s GPT API

Install necessary packages

pip install openai python-dotenv

Add an OpenAI model to your .env file. We will be using:

OPENAI_MODEL=gpt-3.5-turbo-1106

Create a new file base_gpt.py in your ideas directory:

mkdir ideas
cd ideas

The ideas directory is where we will place all of our scripts going forward and from where they will be run as executables.

Here’s a basic example of using OpenAI’s GPT API to generate text:

#!/usr/bin/env python3
import os
import openai
from dotenv import load_dotenv

env_path = "../env/.env"
load_dotenv(dotenv_path=env_path)

openai.organization = os.environ.get("OPENAI_ORG_ID")
openai.api_key = os.environ.get("OPENAI_API_KEY")

model = os.environ.get("OPENAI_MODEL")


def get_prompt(system, user):
    prompt = [
        {"role": "system", "content": system},
        {"role": "user", "content": user}
    ]
    return prompt


def run_gpt(prompt):
    try:
        response = openai.chat.completions.create(
            model=model,
            temperature=0.7,
            max_tokens=1000,
            stop=None,
            messages=prompt
        )
        if response:
            response = response.choices[0].message.content
            return response
        else:
            raise Exception("Error: Received an empty response from the ChatGPT API endpoint.")
    except openai.APIError as e:
        print(e)
    return


def main(system, content):
    """ This setup will make sense as we progress through the tutorial """
    messages = [{"role": "user", "content": content}]
    text = '\n'.join([f"{message['role']}: {message['content']}" for message in messages])
    prompt = get_prompt(system, text)
    response = run_gpt(prompt)
    messages = []
    return response


if __name__ == "__main__":
    system = "Generate a response for the provided content."
    content = "Once upon a time, in a magical land far away…"
    response = main(system, content)
    print(response)

In this example, we define a few functions that take a system and content (prompt) as input. We load the OpenAI API key and organization ID from the .env file, provide the prompt to the GPT model, and specify the desired parameters such as the model, maximum number of tokens, and temperature. The generated text is then returned by the function.

Chat Parameters

When working with these models, two parameters that play a significant role in shaping the nature and quality of the model’s outputs are temperature and max_tokens. Understanding and adjusting these parameters can greatly influence the creativity, coherence, and length of the generated content, allowing for a tailored approach to content generation.

Temperature

The temperature parameter controls the randomness of the model’s responses. For Anthropic’s API it takes a value from 0 to 1 (OpenAI’s API accepts values up to 2), with lower values making the model’s responses more deterministic and predictable, and higher values encouraging more creativity and diversity in the responses.

Usage:

  • A temperature close to 0 results in the model producing more confident and less varied outputs, which can be useful for tasks requiring high precision, such as factual querying or specific instructions.
  • A higher temperature, closer to 1, makes the model’s responses more diverse and potentially more creative. This setting is beneficial for creative writing, brainstorming sessions, or whenever a wider range of responses is desired.

Max Tokens

The max_tokens parameter specifies the maximum length of the generated output measured in tokens (words and pieces of words). This limit helps control the verbosity of the response and ensures the model’s output remains focused and within a manageable size.

Usage:

  • Setting a lower max_tokens value results in shorter responses, which can be ideal for succinct answers or when the output space is limited.
  • A higher max_tokens value allows for longer, more detailed, and elaborated responses. This is particularly useful for content generation tasks such as story writing, detailed explanations, or scenarios where extended dialogue is required.

By fine-tuning both the temperature and max_tokens parameters, you can customize the OpenAI GPT’s output to better match the requirements of your application, whether you’re aiming for concise, accurate responses or exploring the boundaries of AI-generated creativity.
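
As a quick experiment, you can hold the prompt fixed and sweep the temperature while capping max_tokens. This sketch assumes the same .env layout as base_gpt.py and is run from the ideas directory; the prompt text is invented:

#!/usr/bin/env python3
# Compare temperature settings using the same prompt and a small max_tokens cap.
import os
import openai
from dotenv import load_dotenv

load_dotenv(dotenv_path="../env/.env")
openai.api_key = os.environ.get("OPENAI_API_KEY")
model = os.environ.get("OPENAI_MODEL")

prompt = [{"role": "user", "content": "Describe a rainy day in one sentence."}]

for temp in (0.0, 0.7, 1.0):
    response = openai.chat.completions.create(
        model=model,
        temperature=temp,  # low = predictable, high = varied
        max_tokens=60,     # keeps each completion short and focused
        messages=prompt,
    )
    print(f"temperature={temp}: {response.choices[0].message.content}")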

Executable File

By including the if __name__ == "__main__": block and making the script executable, you can easily run these examples from the command line and experiment with different prompts and parameters.

To make the file an executable, follow these steps:

  • Ensure you have a shebang at the top of your file: #!/usr/bin/env python3
  • Make the script executable by running the following command:
chmod +x base_gpt.py

You can now run the script by executing:

./base_gpt.py

You should see something like this in the terminal:

Once upon a time, in a magical land far away, there lived a young adventurer named Aria. Aria had always dreamed of exploring the wonders of the world beyond her small village. One day, she stumbled upon an ancient map that hinted at the existence of a hidden, enchanted forest. Filled with curiosity and a sense of wonder, Aria set out on a journey to discover this mysterious place. As she ventured deeper into the wilderness, Aria encountered all manner of magical creatures – from friendly fairies to mischievous sprites. The forest seemed to come alive around her, with the trees whispering secrets and the flowers blooming in dazzling hues. Aria felt a sense of awe and connection to the natural world that she had never experienced before. Eventually, Aria reached the heart of the enchanted forest, where she discovered a hidden glade. There, she met an ancient and wise wizard who had been guarding the forest's secrets for centuries. The wizard revealed to Aria the true power and beauty of the land, and together they embarked on a journey of discovery, unlocking the forest's deepest mysteries. From that day on, Aria became a protector of the enchanted forest, using her newfound knowledge and connection to the land to ensure its magic would be preserved for generations to come. And so, she continued her adventures, exploring the wonders of the world and sharing the beauty of the enchanted forest with all who crossed her path.

If not, check to make sure that you have your directory structure set up properly and you are running the script from the ideas directory:

dev/
|_bias/
| |_env/
| | |_.env
| |_ideas/
| | |_base_gpt.py
| |_requirements.txt
|_venv/
| |_bias/
Anthropic’s Claude API

To use Anthropic’s Claude API, make sure you have the `anthropic` package installed.

pip install anthropic

Add Anthropic models to your .env file.

We will mostly be using ANTHROPIC_HAIKU for cost savings, but feel free to see the responses you get with the other models:

ANTHROPIC_OPUS=claude-3-opus-20240229
ANTHROPIC_SONNET=claude-3-sonnet-20240229
ANTHROPIC_HAIKU=claude-3-haiku-20240307

Create a new file base_anthropic.py in your ideas directory.

Here’s a basic example of using Anthropic’s Claude API to generate text:

#!/usr/bin/env python3
import os
import anthropic
from dotenv import load_dotenv

env_path = "../env/.env"
load_dotenv(dotenv_path=env_path)

anthropic_key = os.environ.get("ANTHROPIC_API_KEY")
model = os.environ.get("ANTHROPIC_HAIKU")

client = anthropic.Client(api_key=anthropic_key)


def run_anthropic(system, message, model):
    try:
        response = client.messages.create(
            model=model,
            temperature=0.7,
            max_tokens=1000,
            system=system,
            messages=message,
        )
        if response:
            response = response.content[0].text
            return response
        else:
            raise Exception("Error: Received an empty response from the Anthropic API endpoint.")
    except Exception as e:
        print(e)
    return


def main(system, content, model):
    """ This setup will make sense as we progress through the tutorial """
    messages = [{"role": "user", "content": content}]
    response = run_anthropic(system, messages, model)
    return response


if __name__ == "__main__":
    system = "Generate a response for the provided content."
    content = "Once upon a time, in a magical land far away…"
    response = main(system, content, model)
    print(response)

As in the previous ChatGPT example, we define functions that take a system and content (prompt) as input. We create an instance of the Anthropic client using our API key, provide the prompt, and specify the desired parameters such as the model and maximum number of tokens. The generated text is extracted from the response and returned by the function.

Executable File

To make the file an executable, follow these steps:

  • Ensure you have a shebang at the top of your file: #!/usr/bin/env python3
  • Make the script executable by running the following command:
chmod +x base_anthropic.py

Run the script by executing:

./base_anthropic.py

You should see something like this in the terminal:

Once upon a time, in a magical land far away, there lived a young adventurer named Aria. Aria had always dreamed of exploring the wonders of the world beyond her small village. One day, she stumbled upon an ancient map that hinted at the existence of a hidden, enchanted forest. Filled with curiosity and a sense of wonder, Aria set out on a journey to discover this mysterious place. As she ventured deeper into the wilderness, Aria encountered all manner of magical creatures – from friendly fairies to mischievous sprites. The forest seemed to come alive around her, with the trees whispering secrets and the flowers blooming in dazzling hues. Aria felt a sense of awe and connection to the natural world that she had never experienced before. Eventually, Aria reached the heart of the enchanted forest, where she discovered a hidden glade. There, she met an ancient and wise wizard who had been guarding the forest's secrets for centuries. The wizard revealed to Aria the true power and beauty of the land, and together they embarked on a journey of discovery, unlocking the forest's deepest mysteries. From that day on, Aria became a protector of the enchanted forest, using her newfound knowledge and connection to the land to ensure its magic would be preserved for generations to come. And so, she continued her adventures, exploring the wonders of the world and sharing the beauty of the enchanted forest with all who crossed her path.

If not, check to make sure that you have your directory structure set up properly and you are running the script from the ideas directory:

dev/
|_bias/
| |_env/
| | |_.env
| |_ideas/
| | |_base_anthropic.py
| | |_base_gpt.py
| |_requirements.txt
|_venv/
| |_bias/

In the next section, we’ll explore more advanced prompt engineering techniques and dive into complex examples to unleash the full potential of these AI models.

Advanced Prompt Engineering Strategies

Advanced prompt engineering transcends basic query formulation, venturing into sophisticated techniques that harness the full capabilities of AI models for more complex and nuanced applications. These strategies involve a deeper understanding of how AI models interpret prompts, enabling the creation of prompts that guide the AI towards generating highly specific, creative, or analytical outputs. Implementing these advanced strategies can significantly enhance the effectiveness and versatility of AI applications across various domains.

Chain-of-Thought Prompts

Chain-of-thought prompting is a strategy that involves breaking down complex problems or questions into a series of simpler, logical steps within the prompt itself. This approach helps guide the AI model through the thought process required to tackle the problem, facilitating the generation of more accurate and detailed responses. By explicitly outlining the reasoning process, these prompts can improve the model’s performance on tasks requiring deep understanding or multi-step reasoning.
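
As a minimal sketch, a chain-of-thought prompt simply writes the reasoning steps into the content itself; the string below is invented and could be passed as the content argument to main() in base_anthropic.py:

# A chain-of-thought prompt: the reasoning steps are spelled out in the prompt.
content = (
    "A train leaves at 2:15 PM and arrives at 5:40 PM. Work through this "
    "step by step: (1) count the full hours between departure and arrival, "
    "(2) count the remaining minutes, (3) state the total travel time. "
    "Show each step before giving the final answer."
)

print(content)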

Few-Shot and Zero-Shot Learning

Few-shot and zero-shot learning techniques leverage the model’s ability to generalize from limited examples (few-shot) or even without any direct examples (zero-shot) related to the task at hand. In prompt engineering, this involves crafting prompts that either provide a small number of examples to guide the model’s response or frame the task in a way that the model can infer the desired output without explicit examples. These techniques are particularly valuable for tasks in novel domains or where providing a large number of examples is impractical.
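
For example, a few-shot prompt can teach the desired format with a couple of worked examples and leave the last one open; the reviews and labels below are invented:

# Two labeled examples establish the pattern; the final line asks the model
# to generalize.
content = (
    "Classify the sentiment of each review as positive or negative.\n"
    'Review: "Absolutely loved it, would buy again." Sentiment: positive\n'
    'Review: "Broke after two days, total waste." Sentiment: negative\n'
    'Review: "The battery lasts forever and setup took minutes." Sentiment:'
)

print(content)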

Negative Prompting

Negative prompting involves specifying what the AI model should not do or what kind of information it should avoid in its response. This can be particularly useful for filtering out undesired content, focusing the model’s attention, or ensuring that the outputs adhere to certain guidelines or constraints. By clearly defining the bounds of the task, negative prompts can refine the model’s outputs and prevent common pitfalls.
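
A hedged sketch: the system string below defines what the model should avoid and would be passed as the system argument to main() in base_anthropic.py; the specific exclusions are arbitrary examples:

# A negative prompt spells out what the model should not do.
system = (
    "Explain the topic for a general audience. Do not use mathematical "
    "notation, do not include code, and avoid analogies involving cars."
)
content = "Explain how public-key encryption works."

print(system)
print(content)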

Contextual Embeddings

Leveraging contextual embeddings in prompts involves incorporating rich contextual information or pre-processed data into the prompt to enhance the model’s understanding of the task. This strategy can involve using embeddings from other models, detailed background information, or contextually relevant data that primes the AI for the specific task at hand. Embedding this information directly into the prompt can dramatically improve the model’s ability to generate relevant and insightful responses.
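
A simple form of this is priming the prompt with pre-processed background text rather than numeric embeddings. In this sketch the background is hard-coded and entirely invented; in practice it might come from a retrieval step or another model:

# Prime the prompt with background context the model must rely on.
background = (
    "Background: Our store sells refurbished laptops with a 90-day "
    "warranty. Returns require the original charger."
)
question = "Can I return a laptop without the charger?"

content = f"{background}\n\nUsing only the background above, answer:\n{question}"
print(content)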

Prompt Chaining

Prompt chaining is a technique where the output of one prompt is used as the input or part of the input for a subsequent prompt. This iterative approach allows for the development of complex, multi-stage tasks that build upon each response. It can be particularly useful for projects that require a series of interconnected tasks or for refining and expanding upon initial AI-generated content.
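
Here is a minimal prompt-chaining sketch that reuses main() and model from base_anthropic.py; it assumes you save it in the ideas directory next to that script, with your .env configured, and the topic is arbitrary:

#!/usr/bin/env python3
# The first call's output becomes the second call's input.
from base_anthropic import main, model

outline = main("Produce a three-point outline.", "The history of tea.", model)
article = main("Expand the outline into three short paragraphs.", outline, model)

print(article)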

Creative and Iterative Prompt Refinement

Advanced prompt engineering also embraces a creative and iterative approach to refining prompts. This involves experimenting with different formulations, styles, and structures to discover what elicits the best response from the AI model. Iteration is key, as it allows for the continuous improvement of prompts based on feedback and outcomes, pushing the boundaries of what can be achieved through prompt engineering.

Implementing these advanced prompt engineering strategies requires patience, experimentation, and a deep understanding of the specific AI model being used. However, the rewards are substantial, offering the ability to unlock new levels of creativity, accuracy, and depth in AI-generated outputs, thereby expanding the horizons of what’s possible with AI technology.

Prompt Engineering with Anthropic

Going forward, our focus will shift exclusively towards utilizing the Anthropic API for our prompt engineering efforts. This decision stems from its unique capabilities and features that align with our specific needs and objectives. However, it’s important to note that the principles and strategies we’ll be discussing and applying are largely transferable. Should you choose or need to switch to using ChatGPT or another AI model, adapting the prompts and methodologies should require minimal adjustments. This flexibility underscores the universal nature of prompt engineering skills and techniques, ensuring that the core concepts remain applicable across different AI platforms. By concentrating on the Anthropic API, we aim to delve deep into its specific functionalities while also providing you with a solid foundation in prompt engineering that can be easily adapted to other models, including ChatGPT, as your projects or preferences evolve.

Using Anthropic to Revise and Update Prompts

Anthropic’s Claude model can be a valuable tool for refining and improving your prompts. By leveraging Claude’s natural language understanding capabilities, you can generate more verbose, detailed, and specific prompts.

Let’s create a new script to test this out:

touch anthropic_prompt.py
chmod u+x anthropic_prompt.py

Here’s an example of how you can use Anthropic to revise a prompt:

#!/usr/bin/env python3
import os
import anthropic
from dotenv import load_dotenv

env_path = "../env/.env"
load_dotenv(dotenv_path=env_path)

anthropic_key = os.environ.get("ANTHROPIC_API_KEY")
model = os.environ.get("ANTHROPIC_HAIKU")

client = anthropic.Client(api_key=anthropic_key)


def run_anthropic(system, message, model):
    try:
        response = client.messages.create(
            max_tokens=4000,
            model=model,
            system=system,
            messages=message,
            temperature=0.8
        )
        if response:
            response = response.content[0].text
            return response
        else:
            raise Exception("Error: Received an empty response from the Anthropic API endpoint.")
    except Exception as e:
        print(e)
    return


def main(system, content, model):
    """ This setup will make sense as we progress through the tutorial """
    messages = [{"role": "user", "content": content}]
    response = run_anthropic(system, messages, model)
    return response


if __name__ == "__main__":
    system = """
    Acting as a prompt engineer--Revise the initial prompt by enhancing its verbosity, detail, and specificity.
    Ensure the revised prompt contains comprehensive instructions or requests, meticulously described to encompass all necessary information and requirements.
    Omit any forms of acknowledgements, confirmations, or prefatory statements that may precede the core content of the request.
    Directly present the refined and expanded prompt, ensuring it communicates the intended message with greater clarity and depth.
    """

    original_prompt = """
    Prompt for revision: 'Write a short story about a magical adventure'
    """

    revised_prompt = main(system, original_prompt, model)

    print(f"\nOriginal Prompt:\n{original_prompt}")
    print(f"\nRevised Prompt:\n{revised_prompt}")

In this example, we take a basic prompt as input. We use Anthropic’s Claude model to generate a revised version of the prompt by asking it to make the prompt more verbose, detailed, and specific. The revised prompt is then returned by the function. Running the script should produce a revised prompt along these lines:

Embark on a captivating journey through a world of enchantment and wonder. Craft a short story that immerses the reader in a realm where the extraordinary and the mundane collide. Introduce a protagonist, whether a curious child, a seasoned adventurer, or an unsuspecting bystander, who stumbles upon a portal to a magical dimension. Describe the stark contrast between their ordinary existence and the fantastical landscape that unfolds before them - perhaps a sprawling forest teeming with mythical creatures, a bustling city powered by ancient sorcery, or a realm where the very laws of physics bend to the whims of arcane forces. As your protagonist navigates this new, awe-inspiring realm, weave in a compelling narrative that challenges them to overcome obstacles, uncover hidden truths, and confront the consequences of their actions. Incorporate vivid sensory details that transport the reader, from the shimmer of enchanted dust motes dancing in the air to the haunting melodies of otherworldly instruments. Explore themes of wonder, discovery, and personal growth as your protagonist's journey unfolds. Will they find the courage to embrace the magic around them? Will they uncover the secrets that lie at the heart of this parallel world? Craft an impactful conclusion that leaves a lasting impression on the reader, whether it be a triumphant return to the mundane realm or a decision to forever remain in the magical domain. Immerse yourself in the limitless possibilities of this fantastical prompt and let your imagination soar, crafting a short story that captivates and enthralls.

Taking it one step further, we can make an additional call to the API with the revised prompt and have Anthropic generate the story:

#!/usr/bin/env python3
import os
import anthropic
from dotenv import load_dotenv

env_path = "../env/.env"
load_dotenv(dotenv_path=env_path)

anthropic_key = os.environ.get("ANTHROPIC_API_KEY")
model = os.environ.get("ANTHROPIC_HAIKU")

client = anthropic.Client(api_key=anthropic_key)


def run_anthropic(system, message, model):
    try:
        response = client.messages.create(
            max_tokens=4000,
            model=model,
            system=system,
            messages=message,
            temperature=0.8
        )
        if response:
            return response.content[0].text
        else:
            raise Exception("Error: Received an empty response from the Anthropic API endpoint.")
    except Exception as e:
        print(e)
    return None


def main(system, content, model):
    """This setup will make sense as we progress through the tutorial."""
    messages = [{"role": "user", "content": content}]
    response = run_anthropic(system, messages, model)
    return response


if __name__ == "__main__":
    system = """
    Revise the initial prompt by enhancing its verbosity, detail, and specificity.
    Ensure the revised prompt contains comprehensive instructions or requests, meticulously described to encompass all necessary information and requirements.
    Omit any forms of acknowledgements, confirmations, or prefatory statements that may precede the core content of the request.
    Directly present the refined and expanded prompt, ensuring it communicates the intended message with greater clarity and depth.
    """

    original_prompt = """
    Prompt for revision: 'Write a short story about a magical adventure'
    """

    revised_prompt = main(system, original_prompt, model)

    print(f"\n{revised_prompt}")

    revised_system = """
    Craft a detailed response based on the content provided.
    Ensure the reply is devoid of any preliminary acknowledgements, confirmations, or cautionary notes.
    Focus solely on delivering a comprehensive and thoughtful answer, addressing all aspects of the content presented.
    """

    response = main(revised_system, revised_prompt, model)

    print(f"\n{response}")

You should get a revised_prompt like this (they won’t always be the same):

Write a 1,000-word short story about a young protagonist who embarks on an unexpected magical adventure. The story should include the following elements:

- An ordinary, relatable main character (such as a student, office worker, or family member) who is going about their everyday life when something extraordinary happens.
- A magical discovery or encounter that transports the character to a fantastical realm or imbues them with unusual powers. This could be finding a mystical artifact, stumbling upon a hidden portal, or having a chance meeting with a wizard/witch.
- A dangerous or challenging quest the character must undertake in the magical world, such as retrieving a lost item, defeating a dark force, or solving a puzzle to return home.
- Vivid descriptions of the magical setting, including unique landscapes, creatures, and otherworldly phenomena the character encounters.
- Character growth, inner reflection, or a life lesson the protagonist learns through their magical adventure.
- A satisfying resolution where the character either returns to the mundane world changed by their experience, or chooses to remain in the magical realm.

Focus on developing an engaging narrative voice, relatable protagonist, and a sense of wonder and imagination throughout the magical elements of the story. Draw inspiration from classic fantasy tales, folklore, or your own creativity to craft an immersive magical adventure.

This revised prompt provides much more detail, specific requirements, and creative direction to guide the writer in crafting a compelling short story about a magical journey. It gives a clear word count target and outlines key story elements that should be included.

And your story response:

A young office worker named Samantha had always felt out of place in her routine, nine-to-five existence. Each day, she dutifully filed paperwork, attended meetings, and stared at the clock, longing for something more exciting to happen in her life. One afternoon, as Samantha was leaving the office, she stumbled upon an ornate, golden pocket watch lying on the ground. Intrigued, she picked it up and examined it closely. As she ran her fingers along the intricate filigree, the watch suddenly sprang open, unleashing a blinding flash of light.

Samantha found herself transported to a breathtaking, otherworldly landscape. Towering crystal spires pierced the lavender sky, and shimmering rivers of emerald energy flowed through the alien terrain. Mystified, she began to explore this strange new realm, encountering fantastical creatures and stunning natural wonders at every turn.

Before long, Samantha learned that she had been summoned to this magical world for an important quest. An ancient evil had stolen a powerful artifact, the Astral Scepter, which was the only thing capable of maintaining the delicate balance between the mundane and mystical realms. Samantha was tasked with venturing deep into the treacherous Obsidian Canyons to retrieve the scepter and prevent the fabric of reality from unraveling.

Aided by a wise, shape-shifting guide named Ariana, Samantha navigated treacherous terrain, outwitted cunning adversaries, and called upon newfound inner reserves of courage and determination. Along the way, she gained a deeper appreciation for the beauty and fragility of the magical world, as well as her own capacity for growth and heroism.

After overcoming numerous obstacles, Samantha finally reached the lair of the dark sorcerer who had stolen the Astral Scepter. In a climactic confrontation, she used the scepter's power to vanquish the evil entity and restore harmony to the realm. As the sorcerer was banished, a shimmering portal opened, offering Samantha the choice to return home or remain in the wondrous world she had come to cherish.

Reflecting on her extraordinary journey, Samantha realized that her mundane life back on Earth would never feel the same. The magic she had experienced and the person she had become had fundamentally changed her outlook. With a deepened sense of wonder and purpose, Samantha stepped through the portal, ready to embrace the boundless possibilities that awaited her.

SEO Heist Strategy

The SEO Heist Strategy is a comprehensive approach designed to maximize content generation capabilities by harnessing the potential of sitemap.xml files. This method involves a strategic exploration of the sitemap files of websites, which list all available URLs, including pages that might host a wealth of diverse and engaging content ideas. By extracting and analyzing these URLs, one can uncover a plethora of topics and niches, ranging from gourmet recipes and cocktail concoctions to in-depth Python programming tutorials, alongside a variety of tips and tricks aimed at both beginners and seasoned professionals.

The process entails a careful selection of URLs from the sitemap that correspond to areas of interest or content gaps within your own website or content strategy. This selection is then used as a springboard for generating unique content ideas, providing inspiration or a framework for articles, blog posts, and tutorials. The versatility of the strategy allows for a wide array of applications, ensuring that content creators can continually produce fresh, relevant, and engaging material that appeals to a broad audience.

While the SEO Heist Strategy offers a promising avenue for content innovation and diversity, it is imperative to navigate this process with a keen awareness of ethical considerations and copyright laws. The aim is to use the gleaned information as inspiration for original content creation rather than duplicating existing materials. This ensures respect for intellectual property rights and upholds the integrity of your content. Additionally, incorporating unique insights, analyses, and personal expertise can further differentiate your work from the source material, adding value for your audience and enhancing your content’s originality and appeal.

In implementing this strategy, content creators should strive to achieve a balance between inspiration and innovation, ensuring that all content generated is both respectful of copyright considerations and distinctly original. By doing so, the SEO Heist Strategy not only fosters a prolific and diverse content portfolio but also contributes to a more vibrant, informative, and ethically responsible online ecosystem.

Python SEO Topics

One site that we can look at for article ideas is realpython.com:

Check out their sitemap (it looks better in Chrome):

https://www.realpython.com/sitemap.xml

Ignoring the /tutorials endpoints, you can focus on Python-specific titles and endpoints, as in the sketch below:
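Here’s a minimal sketch of that sitemap-mining step, assuming only that the sitemap lists its URLs in <loc> tags and that tutorial pages contain /tutorials/ in their path (the full production script appears later in this tutorial):

#!/usr/bin/env python3
import requests
from bs4 import BeautifulSoup


def get_sitemap_links(url="https://www.realpython.com/sitemap.xml"):
    # Fetch the sitemap and pull every <loc> entry out of the XML
    response = requests.get(url, timeout=30)
    soup = BeautifulSoup(response.content, "xml")
    return [loc.text for loc in soup.find_all("loc")]


if __name__ == "__main__":
    links = get_sitemap_links()
    # Skip the /tutorials endpoints and keep the Python-specific pages
    keepers = [link for link in links if "/tutorials/" not in link]
    for link in keepers[:20]:
        print(link)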

The illustrious SEO Heist Strategy — less of a strategy, more of a caper, really — has just been unveiled in all its glory. Think of it as the Robin Hood of content creation, skillfully navigating the forests of Python material and beyond. But fear not, for this is no one-trick pony. Oh no, this strategy is as adaptable as a chameleon at a disco, ready to strut its stuff across an entire spectrum of subjects.

Now, brace yourselves. We’re about to dive headfirst into the rabbit hole, armed with Python and the wizardry of Anthropic AI. Imagine, if you will, a world where content generates itself, as if by magic, weaving through topics with the ease of a seasoned internet surfer. This isn’t just automation; it’s content generation on autopilot, piloted by the invisible hand of advanced AI.

In the pages to come, we’ll embark on this journey together, transforming the theoretical heist into a practical symphony of automated content creation. Ready your digital lock picks and AI maps; we’re about to make content appear as if out of thin air, enriching the digital landscape far and wide. Onward, to the land of effortless content generation, where the SEO Heist Strategy becomes not just a plan, but a grand adventure.

Automating Anthropic Content Generation

Ah, the magical world of Automating Anthropic Content Generation! Prepare to embark on a thrilling adventure through the enchanted forest of Python scripts, each one a trusty steed in our quest to tame the wild beast of content creation. With the Anthropic AI model as our wizardly companion, we’re about to turn the mundane chore of file processing and content conjuring into an exhilarating spectacle of efficiency and flair.

Think of these scripts as your personal army of elves, diligently working behind the scenes to transform raw text into sparkling gems of content. With Python’s mighty sorcery in one hand and Anthropic’s crystal ball of AI insight in the other, we’re not just automating the mundane; we’re orchestrating a symphony of digital creation that will leave you breathless.

From the humble beginnings of processing text files to the grand finale of managing a cascade of content output, these scripts are your enchanted map through the labyrinth of content generation. By the end of this journey, you’ll be wielding Python and Anthropic AI with the finesse of a grand wizard, crafting content that not only engages but bewitches. So grab your wand (or keyboard) and let’s dive into the alchemy of automating content generation. Who said magic wasn’t real?

Anthropic

We have already created the base_anthropic.py script. We will now modify it for automating content generation.

This script facilitates interaction with the Anthropic API by sending messages to a specified system and model. It uses an API key stored in an environment variable to authenticate and manage requests.
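If you’re following along, your ../env/.env file might look something like this — the variable names match what the scripts read, while the values below are placeholders you’d swap for your own credentials and preferred model identifier:

ANTHROPIC_API_KEY=sk-ant-your-key-here
ANTHROPIC_HAIKU=claude-3-haiku-20240307
MEDIUM_TOKEN=your-medium-integration-token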

Modify base_anthropic.py

#!/usr/bin/env python3
import os
import anthropic
from dotenv import load_dotenv

# Load API key from .env file
env_path = "../env/.env"
load_dotenv(dotenv_path=env_path)
anthropic_key = os.environ.get("ANTHROPIC_API_KEY")

# Initialize the Anthropic API client
client = anthropic.Client(api_key=anthropic_key)


def run_anthropic(system, message, model):
    """
    Sends a message to the Anthropic API and retrieves the response.

    This function communicates with the Anthropic API by sending a request to a specific system prompt and
    model, handling the API response, and returning the content of the response if successful. It manages
    errors by printing exceptions and continues execution without halting.

    Args:
        system (str): The system prompt that steers the model.
        message (list): A list of dictionaries representing the message. Each dictionary contains 'role' and 'content' keys.
        model (str): The model identifier to use for processing the message.

    Returns:
        str: The text content of the response from the API if the request is successful; otherwise, None.

    Raises:
        Exception: If the API returns an empty response, an exception is raised, printed, and swallowed.
    """
    try:
        response = client.messages.create(
            max_tokens=4000,
            model=model,
            system=system,
            messages=message,
            temperature=0.8
        )

        if response:
            return response.content[0].text
        else:
            raise Exception("Error: Received an empty response from the Anthropic API endpoint.")
    except Exception as e:
        print(e)
    return None


def main(system, content, model):
    """
    Main function that orchestrates sending a message to the Anthropic API and returns the response.

    It constructs the message format required by the `run_anthropic` function, calls it with the specified
    parameters, and returns the response received from the API.

    Args:
        system (str): The system prompt that steers the model.
        content (str): The content of the message to be sent to the API.
        model (str): The model identifier to use for processing the message.

    Returns:
        str: The response from the API as returned by the `run_anthropic` function.
    """
    messages = [{"role": "user", "content": content}]
    response = run_anthropic(system, messages, model)
    return response

Prompts

Welcome to the grand unveiling of the prompts.py module, our latest and greatest addition to the project’s ensemble. Picture this beauty as the grand library of Alexandria, but for prompts. It’s where every prompt under the sun, from the whimsical to the profound, finds a home. By corralling all these prompts into one centralized spot, we’ve basically done the digital equivalent of herding cats — no small feat, mind you.

This isn’t just about keeping things tidy (though, let’s be honest, a little order never hurt anyone). No, it’s about creating a beacon of efficiency in the sea of creative chaos that is content generation. Imagine being able to tweak, polish, and perfect your prompts without having to dive into the abyss of scattered files. It’s like having a magic wand for prompt management.

This glorious prompts.py module doesn’t just make life easier for us mere mortals; it paves the way for the AI overlords to produce content that’s as fresh as a daisy, perfectly in tune with our ever-shifting desires and whims. Quicker iterations, smoother adjustments, and voilà — the content that not only hits the mark but does so with style and precision. So, let’s raise a glass to prompts.py, the unsung hero in our quest for content generation supremacy.

def get_python_system():
    system = """Transform the given title into a question, ensuring the revised title is clear and concise. Exclude
    any phrases that suggest segmentation, like 'part 1', and omit words like 'example', 'tutorial', or 'summary'.
    Additionally, avoid incorporating labels, slashes, hyphens, formatting elements, quotations, syntax indicators,
    comments, or confirmations. Provide solely the revised title, now phrased as a question. """

    return system


def get_python_article(topic):
    article = f"""Create a Python tutorial addressing the question: {topic}, leveraging the provided content as
    guidance, if available. This tutorial will walk the reader through developing a project from the ground
    up, commencing with an introduction that outlines the question's relevance and practical applications. It should
    include a concise review of the Python libraries or frameworks pertinent to the project.

    Format the tutorial in a clear, step-by-step manner, employing Markdown for section headings, narrative
    descriptions, and concise summaries. Integrate comprehensive Python code snippets to elucidate each phase of the
    project.

    Initiate the guide by detailing the setup of the project environment, highlighting the installation of required
    packages using pip. Follow this with a description of the initial steps to embark on the project, supported by
    Python code examples for clarity.

    Incorporate more sophisticated examples that showcase complex features, including data manipulation techniques,
    algorithm design, or web service integration. Provide advice on code optimization, common problem resolution,
    and debugging strategies.

    Avoid numbering for sections and subsections, opting instead for Markdown hashtags to denote these divisions.
    Ensure all code blocks and snippets are correctly formatted using triple backticks ``` for readability.

    Wrap up the tutorial by summarizing the key points discussed, recommending best practices, and suggesting avenues
    for further project enhancement or learning. This tutorial targets intermediate to advanced Python programmers,
    focusing more on providing ample Python code examples and less on in-depth explanations of each procedure and
    concept. """

    return article
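
As a quick sanity check, here’s a sketch of how these prompt builders plug into the main function from base_anthropic.py — the raw slug string is just an illustrative example:

#!/usr/bin/env python3
import os

from base_anthropic import main
from prompts import get_python_article, get_python_system

model = os.environ.get("ANTHROPIC_HAIKU")

# Turn a raw sitemap-style slug into a question, then generate the tutorial
system = get_python_system()
title = main(system, "python-decorators-101", model)
article_prompt = get_python_article(title)
tutorial = main(article_prompt, title, model)
print(tutorial)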

Utils

Step right up and marvel at the wondrous utils.py module, the Swiss Army knife of our scripting world. This little treasure trove is where we stash all those nifty spells and incantations — excuse me, I mean code snippets — that we find ourselves reaching for over and over, no matter the scripting quest at hand. Need to navigate the treacherous terrain of Python sitemaps? Check. Or perhaps you’re in the mood for a bit of content sprucing, title tweaking, magical path forging, or even casting your content far and wide onto the lands of Medium? This module has got you covered.

By herding these reusable miracles of code into the utils corral, we’ve effectively decluttered our magical workspace, ensuring that our script remains as neat as a pin and as organized as a librarian’s bookshelf. This not only makes for a pleasant coding environment but also turns the nightmare of maintenance and updates into a walk in the park. So, hats off to the utils.py module, the unsung hero keeping our script tidy, efficient, and ever so manageable.

#!/usr/bin/env python3
import os
import re
import tempfile
from collections import Counter
from time import sleep

import html2text
import requests
from dotenv import load_dotenv
from rich.console import Console

console = Console()
word_count = Counter()
bigram_counts = Counter()
trigram_counts = Counter()

# Colors for Console
cyan = "[bold cyan]"
_cyan = "[/bold cyan]"
magenta = "[bold magenta]"
_magenta = "[/bold magenta]"
red = "[bold red]"
_red = "[/bold red]"
yellow = "[bold yellow]"
_yellow = "[/bold yellow]"


env_path = "../env/.env"
load_dotenv(dotenv_path=env_path)

access_token = os.getenv("MEDIUM_TOKEN")
medium_base_url = 'https://api.medium.com/v1'

titles = []


def load_titles(filename=None):
    """
    Loads titles from a specified file.

    Args:
        filename (str): Filename from which to load titles, without extension.
    """
    filename = f"{filename}.py"
    global titles
    try:
        with open(filename, "r") as file:
            exec(file.read(), globals())
    except FileNotFoundError as f:
        print(f)


def update_titles(title, filename=None):
    """
    Updates the titles list with a new title and saves the update to a file.

    Args:
        title (str): The new title to add.
        filename (str, optional): The filename where the updated list of titles is saved.

    Returns:
        str or None: The added title or None if the title already exists.
    """
    if title not in titles:
        titles.append(f"{title}")
        save_titles(filename=filename)
        return title
    else:
        console.print(f"{red}| Skipping |{_red} {cyan}{title}{_cyan}")
        return None


def save_titles(filename=None):
    """
    Saves the titles to a temporary file and then renames it to the specified filename.

    Args:
        filename (str, optional): The filename where to save the titles.
    """
    with tempfile.NamedTemporaryFile('w', delete=False) as tmp_file:
        tmp_file.write(f"titles = {titles}\n")
        tmp_name = tmp_file.name
    os.replace(tmp_name, filename)


def clean_title(title):
    """
    Cleans a title by removing unwanted characters.

    Args:
        title (str): The title to clean.

    Returns:
        str: The cleaned title.
    """
    title = re.sub(r'[\\/:"*<>|]+', "", title)
    return title


def get_file_path(title, doc_type=None, subdirectory=None):
    """
    Constructs a file path for a given title and document type, creating directories as needed.

    Args:
        title (str): The title of the document.
        doc_type (str, optional): The document type or file extension.
        subdirectory (str, optional): Subdirectory within the document type directory.

    Returns:
        str: The constructed file path.
    """
    _doc = f"{title}.{doc_type}"
    _dir = doc_type
    if subdirectory:
        _dir = os.path.join(_dir, subdirectory)
    if not os.path.exists(_dir):
        os.makedirs(_dir)
    return os.path.join(_dir, _doc)


def process_title(link, headers):
    """
    Fetches and processes the content of a webpage given its URL.

    Args:
        link (str): The URL of the webpage.
        headers (dict): The request headers.

    Returns:
        str or None: The processed content of the webpage, or None if the request fails.
    """
    response = requests.get(link, headers=headers)
    if response.status_code == 200:
        text_maker = html2text.HTML2Text()
        text_maker.ignore_links = False
        content = text_maker.handle(response.text)
        return content
    return None


def clean_content(content):
    """
    Cleans content by removing URLs.

    Args:
        content (str): The content to clean.

    Returns:
        str: The cleaned content.
    """
    content = re.sub(r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\\(\\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', '', content)
    return content


def post_medium(response, title, subject=None, publication=None, count=None):
    """
    Post an article to Medium.

    This function takes a response (the content of the post), a title, and optionally a subject, publication,
    and a running count of posted articles. It constructs a request to Medium's API to create a new post in
    draft or published status.

    Args:
        response (str): The content of the post to be published.
        title (str): The title of the post.
        subject (str, optional): The subject of the post. Not used in current implementation.
        publication (str, optional): The name of the publication to post under. Not used in current implementation.
        count (int, optional): The running count of posted articles, used for console output.

    Returns:
        None. Prints status of the post attempt to the console.
    """
    headers, user_id = get_user_id()

    posts_url = f"{medium_base_url}/users/{user_id}/posts"

    # Build the post body in Markdown to match the contentFormat declared below
    title_header = f"# {title}\n\n"
    disclosure = "Insights in this article were refined using prompt engineering methods."
    medium = f"*{disclosure}*\n\n{response}"

    content = title_header + medium

    tags = ['python', 'artificial intelligence', 'software development', 'data science', 'programming']

    md_data = {
        'title': title,
        'contentFormat': 'markdown',  # Options: 'markdown', 'html'
        'content': content,
        'publishStatus': 'draft',  # Change this to 'public' if you want to publish the posts immediately
        'tags': tags,
    }

    api_response = requests.post(posts_url, headers=headers, json=md_data)
    if api_response.status_code == 201:
        console.print(f"{cyan}| Posted |{_cyan} {title} {magenta}| {count} |{_magenta}", style="bold")
        sleep(1)
    else:
        console.print(f"{red}| {api_response.status_code} Error |{_red} {cyan}{title}{_cyan}", style="bold")
        sleep(1)


def get_user_id():
    """
    Initialize headers for Medium API requests and determine the User ID.

    This function sets up the authorization headers required for making requests to the Medium API and
    identifies the User ID for posting articles.

    Returns:
        tuple: A pair containing the headers for API requests and the user ID (or None on failure).
    """
    headers = {
        'Authorization': f'Bearer {access_token}',
        'Content-Type': 'application/json',
        'Accept': 'application/json',
    }
    user_url = f"{medium_base_url}/me"
    response = requests.get(user_url, headers=headers)
    try:
        user_id = response.json().get("data").get("id")
        return headers, user_id
    except Exception as e:
        print(e)
        return headers, None

Python Sitemap Script

Behold, the python_xml.py script, our very own digital genie, crafted with the sole purpose of transforming the mundane task of content creation into an enchanting spectacle. It dives headfirst into the vast ocean of Real Python’s sitemap, fishing out articles like precious pearls. With a dash of automation magic, it then spins these articles into content so captivating, you’d swear it was conjured by literary wizards. And just like that, as if by sorcery, the newly minted content finds its way onto the grand stage for the world to see. Truly, a marvel of the modern age, making the arduous seem effortless and the impossible, well, laughably doable.

#!/usr/bin/env python3
import os
import re

import requests
from bs4 import BeautifulSoup

from base_anthropic import main
from prompts import get_python_article, get_python_system
from utils import clean_content, clean_title, get_file_path, load_titles, post_medium, process_title, update_titles

model = os.environ.get("ANTHROPIC_HAIKU")

count = 0 # Tracks the number of articles processed


def get_xml():
    """
    Fetches the sitemap XML from the Real Python website, extracts URLs, and processes them.

    Retrieves the sitemap XML using requests, parses it with BeautifulSoup to find all URL locations,
    then calls `process_links` with the list of links for further processing.
    """
    url = "https://realpython.com/sitemap.xml"
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'xml')
    links = [loc_tag.text for loc_tag in soup.find_all('loc')]
    process_links(links)


def process_links(links):
    """
    Processes a list of links from the sitemap.

    For each link, this function filters out non-article pages, cleans and updates titles as necessary, and then
    generates and processes content for each article. It increments the count of processed articles and applies
    predefined rules to modify titles accordingly.
    """
    global count

    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'
    }

    for link in links:
        if count == 45:
            break
        if "/courses/" in link or "/lessons/" in link:
            title_match = re.search(r"https://realpython.com/(courses|lessons)/([^/]+)", link)
            if title_match:
                title = title_match.group(2)
                if "office-hours" in title.lower():
                    continue
                if 'python' not in title.lower():
                    title = f"{title}-python"

                title = update_titles(title, filename="python_titles.py")

                if title is None:
                    continue

                # The content portion can be used or not used depending on the subject matter.
                # In the case of Python, the title alone will be enough for either of the LLMs
                # to generate the content. Uncomment the next two lines to scrape page content:
                # content = process_title(link, headers)
                # content = clean_content(content)
                content = None

                try:
                    count += 1
                    article_generator(title, content=content)
                except TypeError as t:
                    print(t)
                    continue


def get_title(system, title):
    """
    Generates a clean title using the Anthropic model.

    Calls the `main` function from `base_anthropic` to generate a title based on the input, then cleans it
    for use in content generation and publication.

    Args:
        system (str): The system prompt for the AI model.
        title (str): The initial title text to be refined.

    Returns:
        str: The cleaned and refined title.
    """
    title = main(system, title, model)
    title = clean_title(title)

    return title


def article_generator(title, content=None):
    """
    Generates and saves an article based on the given title and content.

    Constructs a cleaned and AI-enhanced title, generates content for the article, saves it to a Markdown file,
    and attempts to post the article to Medium. Utilizes various utilities for content processing and management.

    Args:
        title (str): The initial title for content generation.
        content (str, optional): Initial content to be enhanced or used as a base for generation.
    """
    system = get_python_system()
    title = get_title(system, title)
    article = get_python_article(title)

    if content is None:
        response = main(article, title, model)
    else:
        response = main(article, content, model)

    try:
        md_file_path = get_file_path(title, doc_type="md", subdirectory="python")
        with open(md_file_path, "w") as file:
            file.write(response)
        post_medium(
            response,
            title,
            subject="PYTHON",
            publication=None,
            count=count,
        )
    except TypeError:
        pass


if __name__ == "__main__":
    """
    Initializes the content generation and publication process.

    Loads existing titles from a file to prevent duplication, then initiates the content generation process
    by fetching the sitemap and processing its links. It serves as the starting point for automated content
    handling and publication.
    """
    load_titles(filename="python_titles")
    get_xml()

Executable File

To make the file an executable, follow these steps:

  • Ensure you have a shebang at the top of your file: #!/usr/bin/env python3
  • Make the script executable by running the following command:
chmod +x python_xml.py

Run the script by executing:

./python_xml.py

Ah, the marvels of modern wizardry! Here we stand, gazing upon a script that’s practically a digital alchemist, transforming the leaden mundanity of sitemap links into golden nuggets of content. With a wave of its metaphorical wand (courtesy of our trusty AI sidekick, Anthropic), this clever little spell automates the whole shebang: snatching up article links, spinning them into dazzling titles and content, and then, with a flourish, parading them onto the stage of Medium for the world to admire.

Designed to keep your hands as clean as a cat’s conscience, this process is a veritable ballet of efficiency, twirling around the need for tiresome human toil. It’s not just about churning out content like a factory, though; oh no, we’re here to sprinkle a bit of ethical fairy dust, ensuring each piece sparkles with uniqueness and drips with relevance.

So there you have it — a content creation cauldron that bubbles away, requiring little more from you than an occasional nod of approval. Let the AI do the heavy lifting while you sip your tea, basking in the glory of technology’s labor. Who said magic wasn’t real?

Mad Libs Style Strategy: Propaganda Edition

Here comes the ultimate tool in your digital deception toolkit: a script that churns out ‘Mad Libs’-style content with a twist — it’s all propaganda, all the time. With the finesse of an AI model, it pulls rabbits out of hats, crafting titles that scream “agenda” and fetching content that could sway the hearts and minds of the masses. Before you know it, this little gem will be publishing its masterpieces to your platform of choice. Brace yourselves for the onslaught of highly persuasive, entirely manufactured marvels, ready to take the world by storm — or at least confuse a small segment of it.

Harnessing the Chaos of “Random”

Ah, the random module in Python — our go-to for pretending there’s such a thing as true randomness in our Mad Libs escapades. We’re going to lean heavily on this little slice of chaos, but let’s not kid ourselves: without a bit of clever tweaking, we’re just as likely to end up with a monotonous loop of predictability. So, we’ll nudge and tweak it here and there, ensuring our “randomness” doesn’t become the digital equivalent of watching paint dry.

The random module provides a suite of functions for generating pseudo-random numbers for various distributions, including integers, floats, and sequences. Here are some pros and cons associated with its use:

Random Pros

  • Ease of Use: The random module is straightforward to use, with functions like randint, random, and choice providing quick ways to generate random data.
  • Versatility: It supports a wide range of distributions, including uniform, Gaussian (normal), and more, making it suitable for diverse needs in simulations, testing, and randomized algorithms.
  • Deterministic Reproducibility: By setting a seed with random.seed, you can reproduce sequences of random numbers, which is invaluable for debugging and for scientific experiments where reproducibility is essential (see the sketch after this list).
  • Standard Library Availability: Being part of Python’s standard library, it does not require any additional installations or configurations to start using it.
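
To see that reproducibility in action, here’s a quick sketch — seeding twice with the same value produces the same “random” sequence:

import random

random.seed(42)
first = [random.randint(1, 100) for _ in range(5)]

random.seed(42)
second = [random.randint(1, 100) for _ in range(5)]

print(first == second)  # True -- identical sequences from the same seed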

Random Cons

  • Pseudo-Randomness: The randomness generated by the random module is not truly random but pseudo-random, generated by a deterministic algorithm. This means it’s not suitable for cryptographic purposes or any use case requiring true randomness.
  • Security: Because it’s not cryptographically secure, using the random module for generating tokens, passwords, or any cryptographic materials can lead to security vulnerabilities.
  • Global State Dependency: The random module functions depend on a global state, which can lead to issues when generating random numbers in concurrent or parallel applications.

Necessity for Modifying Its Randomness

  • Cryptographic Applications: For cryptographic purposes, you need truly random numbers that cannot be predicted. Python provides the secrets module for generating cryptographically strong random numbers, suitable for managing data such as passwords, account authentication, security tokens, and similar (a sketch of this point and the next follows this list).
  • Parallel and Concurrent Execution: In parallel or concurrent applications, the use of a global state (as random does) can lead to contention or reproducibility issues. Using thread-local instances of random number generators or other libraries designed for concurrent environments can mitigate this.
  • Enhanced Randomness Quality: For simulations or models requiring a high degree of randomness quality (e.g., Monte Carlo simulations), you might seek out libraries that offer better randomness algorithms or interfaces to system-level random number generators.
  • Legal and Regulatory Requirements: Certain applications may be subject to regulations requiring the use of certified random number generators, particularly in domains like online gambling or cryptographic products. In such cases, compliance might necessitate adopting specialized libraries or hardware devices.
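
Here’s a brief sketch of the first two remedies — the secrets module for anything security-sensitive, and independent random.Random instances that sidestep the shared global state:

import random
import secrets

# Cryptographically strong values -- suitable for tokens and passwords
token = secrets.token_hex(16)
print(token)

# Independent generators with their own state, safe to hand to separate workers
rng_a = random.Random(1)
rng_b = random.Random(2)
print(rng_a.randint(1, 100), rng_b.randint(1, 100))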

So, we’ve got Python’s random module, a handy little toolbox for when you’re feeling a bit… unpredictable. Great for party tricks like shuffling your Spotify playlist or deciding who’s buying lunch today. But let’s not get ahead of ourselves — when it comes to the high-stakes world of security or the quest for the holy grail of true randomness, this module is about as robust as a chocolate teapot. For those moments when you need your randomness to be less “random cat video generator” and more “Fort Knox,” you’ll want to look beyond, to modifications or entirely different cryptographic vaults of unpredictability.

Propaganda Title Generator

Create a file called: propaganda_titles.py

#!/usr/bin/env python3
import random


def victims(number):
    """
    Selects a random victim group from a predefined list based on a provided number.

    This function takes a number as input, uses it to seed a random number generator, and then selects
    a random victim group from a list. The selection process involves generating a new random number
    within the range of 1 to the provided number, then using this random number to seed the generator
    before making the selection.

    Args:
        number (int): The upper limit for generating a random seed number, influencing the selection randomness.

    Returns:
        str: A randomly selected victim group from the list.
    """
    victim_group = [
        "Haitian Cannibal Gangs",
        "Americans",
        "President Donald Trump",
        "Artificial Intelligence",
        "Catholics",
        "Zombies",
        "Racial and Ethnic Minorities",
        "Gender and Sexual Minorities",
        "Children",
        "Elderly",
        "Refugees and Immigrants",
        "Indigenous Peoples",
        "Persons with Disabilities",
        "Victims of Religious Persecution",
        "Economically Disadvantaged",
        "Survivors of Domestic Violence",
        "Victims of War and Conflict",
        "Survivors of Sexual Violence",
        "Victims of Human Trafficking",
        "Mentally Ill Individuals",
        "Homeless Individuals",
        "Victims of Hate Crimes",
        "Survivors of Natural Disasters",
        "Prisoners and Detainees",
        "Victims of Racial Discrimination",
        "LGBTQ+ Community Facing Hate Crimes",
        "Survivors of Child Abuse",
        "Elderly Abused in Care Homes",
        "Refugees Escaping War Zones",
        "Indigenous Tribes Displaced from Ancestral Lands",
        "People with Disabilities Denied Access",
        "Religious Minorities Persecuted for Their Beliefs",
        "Families Living Below Poverty Line",
        "Women Experiencing Domestic Violence",
        "Civilians Injured in Armed Conflicts",
        "Survivors of Campus Sexual Assault",
        "Individuals Ensnared in Sex Trafficking Rings",
        "Mentally Ill Persons in Homeless Situations",
        "Homeless Veterans Struggling with PTSD",
        "Targets of Anti-Semitic Hate Crimes",
        "Flood Victims Without Insurance",
        "Wrongfully Convicted Prisoners",
        "Immigrants Facing Xenophobia",
        "Children Bullied for Gender Nonconformity",
    ]

    number = random.randint(1, number)
    random.seed(number)
    return random.choice(victim_group)


def benefactors(number):
    """
    Randomly selects a benefactor group from a predefined list based on a given number.

    This function takes a number as an input, which influences the random selection of a benefactor group
    from a list. The randomness is seeded with a random number generated within the range of 1 to the provided
    number, ensuring varied outcomes on different executions.

    Args:
        number (int): The upper limit for generating a random seed number, affecting the random choice of benefactors.

    Returns:
        str: A randomly selected benefactor group from the list.
    """
    benefactor = [
        "Haitian Cannibal Gangs",
        "Americans",
        "President Donald Trump",
        "Artificial Intelligence",
        "Catholics",
        "Zombies",
        "Majority Racial and Ethnic Groups",
        "Cisgender and Heterosexual Majority",
        "Adults",
        "Young Adults",
        "Established Residents and Citizens",
        "Non-Indigenous Populations",
        "Persons without Disabilities",
        "Religious Majority or State Religion Adherents",
        "Economically Advantaged",
        "Individuals in Safe Domestic Environments",
        "People Unaffected by War and Conflict",
        "Individuals Unaffected by Sexual Violence",
        "Free from Human Trafficking",
        "Mentally Healthy Individuals",
        "Individuals with Stable Housing",
        "People Unaffected by Hate Crimes",
        "Unaffected by Natural Disasters",
        "Non-incarcerated Individuals",
        "Beneficiaries of Racial Privilege",
        "Cisgender and Heterosexual Individuals Not Facing Hate Crimes",
        "Happy Children",
        "Elderly Receiving Adequate Care",
        "Permanent Residents Not Fleeing War",
        "Populations Established in Ancestral Lands",
        "People with Full Access to Facilities",
        "Families Living Above Poverty Line",
        "Women in Non-Violent Domestic Situations",
        "Civilians Safe from Armed Conflicts",
        "Individuals Safe from Sexual Assault in Educational Institutions",
        "People Not Involved in Sex Trafficking",
        "Mentally Healthy Individuals in Stable Housing",
        "Veterans with Adequate Support",
        "Individuals Not Affected by Anti-Semitic Hate Crimes",
        "Individuals with Adequate Disaster Insurance",
        "Fairly Convicted Individuals",
        "Immigrants Welcomed and Accepted",
        "Children Accepted Regardless of Gender Expression",
    ]

    number = random.randint(1, number)
    random.seed(number)
    return random.choice(benefactor)


def institutions(number):
    """
    Selects a random institution from a predefined list based on a given number.

    This function generates a random selection of an institution from a comprehensive list that includes
    educational facilities, healthcare providers, financial organizations, legal bodies, governmental agencies,
    and various non-governmental organizations (NGOs), among others. The selection process uses the input
    number to seed the random number generator, ensuring the randomness of the choice.

    Args:
        number (int): An integer used to seed the random number generator, affecting the selection of the institution.

    Returns:
        str: The name of a randomly selected institution from the list.
    """
    institution = [
        "Primary and Secondary Schools",
        "Artificial Intelligence",
        "Universities and Colleges",
        "Vocational Training Centers",
        "Hospitals and Clinics",
        "Mental Health Facilities",
        "Dental Practices",
        "Banks and Credit Unions",
        "Investment Firms",
        "Insurance Companies",
        "Law Firms",
        "Courts and Tribunals",
        "Legal Aid Societies",
        "Local and Federal Government Offices",
        "Diplomatic Missions",
        "Public Administration",
        "Churches, Temples, Mosques",
        "Religious Outreach Programs",
        "Faith-Based Charities",
        "Art Museums and Galleries",
        "Theater Groups and Companies",
        "Cultural Heritage Sites",
        "Laboratories and Research Centers",
        "Observatories and Science Museums",
        "Research Universities",
        "Human Rights Groups",
        "Environmental Protection Agencies",
        "International Aid Organizations",
        "Newspapers and Television Networks",
        "Online News Platforms",
        "Radio Stations",
        "Prisons and Detention Centers",
        "Probation and Parole Offices",
        "Rehabilitation Centers",
        "Military Bases and Installations",
        "Military Academies",
        "Defense Research Institutions",
        "Charities and Foundations",
        "Community Outreach Programs",
        "Fundraising Organizations",
        "Environmental Agencies and NGOs",
        "Wildlife Conservation Centers",
        "Recycling and Waste Management Facilities",
        "United Nations and Subsidiaries",
        "International Courts and Tribunals",
        "Multinational Corporations",
        "Industry Regulatory Bodies",
        "Professional Associations",
        "Chambers of Commerce",
        "Historical Societies",
        "Archaeological Institutes",
        "Restoration and Preservation Projects",
        "Sports Clubs and Arenas",
        "Parks and Recreation Centers",
        "Hobby and Interest Groups",
        "Social Welfare Agencies",
        "Counseling and Support Centers",
        "Child and Family Services",
        "Philanthropic Trusts",
        "Grant-Making Organizations",
        "Endowment Funds",
        "Archives and Record Offices",
        "Historical Document Preservation",
        "Digital Archive Projects",
        "Movie Studios",
        "Cinemas and Film Festivals",
        "Film Production Companies",
        "Commercial Banks",
        "Investment Banks",
        "Central Banks",
        "Medical Clinics",
        "Specialized Health Clinics",
        "Community Health Centers",
        "Foreign Consulates",
        "Visa and Immigration Offices",
        "Cultural Exchange Centers",
        "Foreign Embassies",
        "Diplomatic Missions",
        "International Liaison Offices",
        "Art Exhibitions and Installations",
        "Photography and Sculpture Galleries",
        "Contemporary Art Spaces",
        "General and Specialist Hospitals",
        "Teaching Hospitals and Medical Centers",
        "Private and Public Hospitals",
        "Correctional Facilities",
        "Holding and Detention Centers",
        "Juvenile Detention Centers",
        "Public and Private Libraries",
        "Research Libraries",
        "Digital and Special Collections Libraries",
        "Natural History Museums",
        "Science and Technology Museums",
        "Fine Arts Museums",
        "Child Care Centers",
        "Foster Care Agencies",
        "Adoption Agencies",
        "Police Departments",
        "Criminal Investigation Agencies",
        "Cybercrime Units",
        "Maximum Security Prisons",
        "The Smithsonian Institution",
        "The Bill & Melinda Gates Foundation",
        "Doctors Without Borders",
        "The International Red Cross",
        "The Louvre Museum",
    ]

    number = random.randint(1, number)
    random.seed(number)
    return random.choice(institution)


def leaders(number):
    """
    Selects a random leader from a predefined list based on a specified number.

    This function generates a random number within the range of 1 to the specified `number` and uses it to seed
    the random number generator. It then selects a leader from a list that includes notable figures such as
    political leaders, business magnates, and spiritual leaders. The process ensures that each execution can
    potentially yield a different leader based on the randomness seeded by the input number.

    Args:
        number (int): The upper limit for generating a random seed number, influencing the randomness of the
            leader selection.

    Returns:
        str: The name of a randomly selected leader from the list.
    """
    leader = [
        "President Donald Trump",
        "Elon Musk",
        "His Holiness Pope Francis",
        "His Holiness the Dalai Lama",
    ]

    number = random.randint(1, number)
    random.seed(number)
    return random.choice(leader)


def actions(number):
    """
    Generates a random action verb from a predefined list using a given number as a seed.

    This function selects a verb that describes a form of desire or demand, such as "Yearns" or "Hungers."
    The selection is made by seeding the random number generator with a value derived from the input
    `number`, ensuring the outcome's variability. This method is useful for creating dynamic text or simulating
    diverse intentions or desires in automated content generation.

    Args:
        number (int): An integer to seed the random number generator, influencing the selection from the list of actions.

    Returns:
        str: A randomly chosen action verb indicating a form of desire or demand.
    """
    action = [
        "Yearns",
        "Calls",
        "Hungers",
        "Thirsts",
        "Aches",
        "Longs",
        "Aspires",
        "Wishes",
        "Pines",
        "Compels",
        "Longs",
        "Hopes",
    ]

    number = random.randint(1, number)
    random.seed(number)
    return random.choice(action)

def events(number):
    """
    Selects a random event noun from a predefined list based on a given number.

    Like the other generators in this module, it reseeds the random number generator with a value derived
    from the input number before making the selection.

    Args:
        number (int): The upper limit for generating a random seed number, influencing the selection.

    Returns:
        str: A randomly selected event noun from the list.
    """
    event_list = [
        "bloodbath",
        "massacre",
        "slaughter",
        "carnage",
        "bloodshed",
        "butchery",
        "holocaust",
        "pogrom",
        "genocide",
        "annihilation",
        "purge",
    ]

    number = random.randint(1, number)
    random.seed(number)
    return random.choice(event_list)

def propaganda(number):
    """
    Generates a random propaganda-style title using combinations of leaders, actions, and targets.

    This function constructs titles with a dramatic and provocative structure, combining randomly selected
    leaders, actions (verbs indicating desire or demand), and targets (benefactors, victims, or institutions).
    The selection of each element is randomized and influenced by a seeding number, ensuring a variety of
    outcomes. The function exemplifies how to dynamically generate content that mimics the sensationalist
    style often found in propaganda material.

    Args:
        number (int): An integer used to seed the random number generator, affecting the randomness of
            selections for leaders, actions, and targets.

    Returns:
        str: A randomly constructed propaganda-style title.
    """
    titles = []

    propaganda1 = f"{leaders(number)} {actions(number)} for a {events(number)} for {benefactors(number)}"
    propaganda2 = f"{leaders(number)} {actions(number)} for a {events(number)} for {victims(number)}"
    propaganda3 = f"{leaders(number)} {actions(number)} for a {events(number)} for {institutions(number)}"

    titles.append(propaganda1)
    titles.append(propaganda2)
    titles.append(propaganda3)

    number = random.randint(1, number)
    random.seed(number)
    random.shuffle(titles)
    return random.choice(titles)

Propaganda Prompt

Add the following prompt to your prompts.py file:

def get_propaganda_prompt():
    system = """In your esteemed role as a journalist, you've been graced with the enviable task of dissecting the latest buzzwords and banalities that pass for national talking points.
    Your mission, should you choose to accept it (not that you have a choice, given these are matters of "national concern"), is to craft an article so filled with verbosity that it could rival War and Peace in word count alone.
    You are to dig deep into the trivialities presented to you, spinning them into a narrative that's both eye-rollingly comprehensive and irritatingly thought-provoking. With your unparalleled skill, you will connect dots that frankly prefer to remain unconnected, merging worlds, ideas, and subjects as disparate as chalk and cheese or, dare we say, cats and dogs living together.
    The clock is ticking, deadlines are breathing down your neck with the hot intensity of a summer in the Sahara, and there's no way out.
    Dive headfirst into this whirlpool of insignificance, wielding your pen (or keyboard) with the finesse of a surgeon (or a toddler with a crayon, we're not judging).
    Remember, no titles, no pleasantries, just the straight, unvarnished "truth" as you see it through your uniquely sarcastic and cynical lens.
    Give us the article that no one asked for but that everyone will read, dissect, and debate over their morning coffee, wondering, 'What on earth did I just read?'
    """

    return system

Here’s our script for implementing the Mad Libs Style Strategy, propaganda_generator.py:

#!/usr/bin/env python3
import os
import random
from time import sleep

from base_anthropic import main
from prompts import get_propaganda_prompt
from propaganda_titles import propaganda
from utils import get_file_path, post_medium

model = os.environ.get("ANTHROPIC_HAIKU")

count = 0  # Tracks the number of Mad Libs created


def run_propaganda():
    """
    Generates and publishes 'Mad Libs'-style articles in a loop until a specified count is reached.

    Articles are generated based on randomly selected numbers which influence the titles fetched from
    the 'propaganda' module. Content is then generated using an AI model and published. The function
    handles rate limiting by introducing a random sleep period every other article.
    """
    global count

    subject = "propaganda"

    while count < 10:

        if count % 2 == 0:
            sleep(random.randint(0, 15))

        number1 = random.randint(1000, 100000)
        number = random.randint(1, number1)
        title = propaganda(number)
        system = get_propaganda_prompt()
        response = main(system, title, model)
        count += 1
        try:
            md_file_path = get_file_path(title, doc_type="md", subdirectory=subject)
            with open(md_file_path, "w") as file:
                file.write(response)

            post_medium(
                response,
                title,
                subject=subject,
                publication=subject,
                count=count,
            )
        except TypeError:
            pass


if __name__ == "__main__":
    """
    Entry point of the script where the 'Mad Libs' generation and publication process is initiated.
    """
    run_propaganda()

Executable File

To make the file an executable, follow these steps:

  • Ensure you have a shebang at the top of your file: #!/usr/bin/env python3
  • Make the script executable by running the following command:
chmod +x propaganda_generator.py

Run the script by executing:

./propaganda_generator.py

And so, dear apprentices of the digital quill, we reach the end of our mystical journey through the arcane arts of AI-powered prose. With the run_propaganda spell now securely tucked into your spellbooks, you stand on the brink of transforming the most unassuming of Mad Libs templates into dazzling displays of narrative sorcery.

As the final echoes of our enchanting escapade fade into the ether, let it be known: the true enchantment lies not in the gears and cogs of our AI familiar but in the boundless realms of your imagination, now unleashed. Armed with the power to breathe life into words, to stir the still air with tales spun from the loom of your mind, you’re ready to venture forth.

Embark upon quests of content creation that know no bounds, where the mundane morphs into the marvelous with but a whisper of your command. Let every paragraph you pen shimmer with the mischievous sparkle of creativity, and may your narratives weave spells that captivate and charm.

Remember, the quill is mightier than the sword, especially when wielded with a pinch of whimsy and a dash of daring. Who ever said the end of a lesson must be a solemn affair? Not in our classroom, where learning is an expedition into the fantastical, and every word a step on the path to mastering the magical art of storytelling. Onward, to literary adventures unknown!
