
4 Enhancements to Optimize Your AI Workflows

As AI and large language models (LLMs) become more deeply embedded in the corporate world, organizations are rapidly raising the size, complexity, and impact of what they ask AI to do. While many organizations started with RAG-style chatbots built on their own data, many are progressing to more complex AI workflows, such as processing complex documents or generating bespoke, customer-specific documents thousands of times a day. As the tasks presented to AI grow in sophistication, the approach needs to grow in sophistication as well.

Let’s explore four prompting enhancements you can implement today to improve the quality of your LLM output, even on the most sophisticated tasks.

1. Break it down

As models have continued to improve, there is a perception that you can ‘throw’ anything at them and they will magically return the correct result. That may be the case someday, but for now LLMs perform much better when you, as a human, structure the task for the model, either as a sequence of separate LLM calls or even within a single call. A useful analogy is to think of an LLM call as a person who is very good at performing many small tasks but easily distracted by larger ones. Small, well-defined steps keep the model on task and ensure it does not miss anything.

For example, if you have a long document, ask the LLM to first parse the document for the relevant facts. Then, ask the LLM to order the facts into a narrative. Finally, ask the LLM to produce an output based on the ordered facts.
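To make this concrete, here is a minimal sketch of that three-step pipeline in Python. It assumes the OpenAI Python SDK purely as an example client; the `call_llm` wrapper, the model name, the prompts, and the `contract.txt` document are placeholders you would swap for your own stack.

```python
# A minimal sketch of a three-step decomposition pipeline.
# The client, model name, prompts, and file name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def call_llm(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

document = open("contract.txt").read()  # the long source document

# Step 1: parse the document for the relevant facts.
facts = call_llm(
    "Extract every fact relevant to payment terms from the document below. "
    f"Return one fact per line.\n\n{document}"
)

# Step 2: order the facts into a narrative.
narrative = call_llm(
    f"Arrange the following facts into a single chronological narrative:\n\n{facts}"
)

# Step 3: produce the final output from the ordered facts.
summary = call_llm(
    "Write a one-page summary for a non-technical reader based on this "
    f"narrative:\n\n{narrative}"
)
print(summary)
```

Each call has one narrow job, which is exactly what keeps the model from drifting.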

2. Be as Precise as Possible

Once the AI workflow is broken down into steps, the second most important aspect is being as precise as possible in your instructions to the LLM. While the goal of this article is not a deep dive into the rapidly evolving world of prompt engineering, a more precise prompt will produce results that are at least as good, and usually better. Some of the ways you can improve your results are:

  • explicitly describing the input data,
  • defining any special terminology the LLM is unlikely to have seen in its training data, and
  • being as explicit as possible about the output format. Something as simple as describing the Markdown layout or JSON structure you want can make or break the results (see the prompt sketch after this list).
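As a rough illustration, here is one way such a precise prompt might look. The terminology definitions, field names, and schema are invented for the example; the point is that the input, the vocabulary, and the exact output structure are all spelled out.

```python
# A prompt that states the input format, defines in-house terminology, and
# pins down the exact JSON structure expected back. The field names and the
# "ARR" definition are illustrative.
PROMPT_TEMPLATE = """You will receive a plain-text customer call transcript.

Terminology:
- "ARR" means annual recurring revenue in USD.
- "Churn risk" means the likelihood the customer cancels within 12 months.

Return ONLY valid JSON with exactly these keys:
{{
  "customer_name": string,
  "arr_usd": number,
  "churn_risk": "low" | "medium" | "high",
  "key_quotes": [string]
}}

Transcript:
{transcript}
"""

prompt = PROMPT_TEMPLATE.format(transcript="...call transcript text here...")
# The reply from your LLM client should then parse cleanly with json.loads().
```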

3. Allow the Model to Plan

Giving an LLM a complex task and asking it to deliver an answer in one shot generally doesn’t give the best results. I have heard the analogy that this is like asking a human for an immediate answer without giving them a chance to think the problem through. Most humans would not do very well spitting out an answer on the spot, and the same applies to an AI model. One way to combat this is to insert a “planning” step for the more intricate parts of any workflow. Use a prompt that states something like, “First, create a detailed plan of how you will..." and your results will almost certainly improve.
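In code, this can be as simple as a two-pass pattern: one call that produces only the plan, and a second call that executes the task with the plan as context. The sketch below reuses the same kind of single-prompt wrapper as in the first example; the client, model name, and task text are placeholders.

```python
# A two-pass "plan, then execute" pattern. The client, model name, and task
# text are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def call_llm(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

task = (
    "Draft a migration guide for moving our reporting jobs to the new "
    "scheduler, including rollback steps and a timeline."
)

# Pass 1: ask only for a plan, not the deliverable.
plan = call_llm(
    "First, create a detailed, numbered plan of how you will complete the "
    f"following task. Do not write the deliverable yet.\n\nTask: {task}"
)

# Pass 2: execute the task with the plan as context.
result = call_llm(
    f"Task: {task}\n\nFollow this plan step by step:\n{plan}\n\n"
    "Now produce the finished deliverable."
)
print(result)
```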

4. Allow it to Reflect and Correct

When you adhere to the three methods above, in most cases the output should be dependable. In some situations, such as long, complex workflows, it can help to add one last step in the workflow to reflect on the answer. To continue the human analogies, this is like allowing the LLM to proofread its work. Sometimes no changes are required from the original, but it is an extra level of assurance that requires little effort.

An example reflection prompt might be, “Based on the goal to achieve X, update your answer to improve your response.”
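Here is one way to phrase that as a reusable template. The wording and placeholder values are illustrative; the filled-in prompt would be sent as one final call at the end of the workflow.

```python
# A reflection-and-correction prompt template. {goal} and {draft} are filled
# in from earlier steps of the workflow; the wording is one option, not a
# fixed formula.
REFLECTION_PROMPT = (
    "The goal is to {goal}.\n\n"
    "Here is the current draft:\n{draft}\n\n"
    "Proofread the draft against the goal, fix any errors or omissions, and "
    "return the improved version. If no changes are needed, return the draft "
    "unchanged."
)

final_prompt = REFLECTION_PROMPT.format(
    goal="summarize the payment terms for a non-technical reader",
    draft="...output from the earlier workflow steps...",
)
```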

Adopting these four enhancements in your AI workflows can elevate the accuracy and reliability of your LLM outputs, even in the most complex scenarios. By breaking down tasks, sharpening the precision of your instructions, integrating planning phases, and incorporating reflective review, you position your AI to deliver consistently stronger results. As AI continues to evolve and integrate more deeply into our work environments, understanding and implementing these principles will help your organization stay at the forefront and get the most out of these powerful technologies to drive success and innovation.