
LLM stands for Large Language Model. As of Q1 2024, they are all the rage.

LangChain

https://python.langchain.com/docs/get_started/introduction

LangChain describes itself visually as a 🦜 and a ⛓️. A parrot and a chain. That’s almost poetic (smile)

LangChain is a framework for developing applications powered by language models. It enables applications that:

  • Are context-aware: connect a language model to sources of context (prompt instructions, few-shot examples, content to ground its response in, etc.)

  • Reason: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)

Keep in mind: the LLM is as smart as a parrot.

The use cases vary, but the following is about creating

  • a GPT-4 prompt-answering machine (you may also use other models, such as gpt-3.5-turbo, including models from other vendors)

  • a GPT-4 Python code and documentation generator (you may also use other languages, such as Java)

  • a GPT-4 Python agent which tests and executes code

    • other agents can compare documents, draft emails in MS365 / Gmail, etc.

Today, GPT-4 is the most sophisticated model on the mass market.

Prompt Engineering

“prompt” is an old IT term with origins in the mainframe era. It is how REPL (Read-Eval-Print Loop) frontends for shells (such as Bash, Zsh, or Ksh) ask for human input. In the early days of Human-Computer Interaction, a prompt was the only user interface.

LLMs themselves do not offer Graphical User Interfaces (GUIs) to the users; the prompt is the interface. Here, however, a prompt is not written in a command language (such as Bash) or in a higher-level programming language such as Python. It is written in natural language: English, German, French, Hindi… LLMs can therefore be understood as enabler technologies that bring computation into domains and institutions where stringent engineering interfaces are too complex or time-consuming.

Prototyping environment

The following code is prototyped in Google Colab (which is a Jupyter Notebook service). This is relevant for the dependency management of the Python libraries.

Python libraries

Save this file as requirements.txt in the Colab working space, or if you are an experienced developer, in your development environment of choice.

openai==1.13.3
langchain==0.1.10
pinecone-client==3.1.0
python-dotenv==1.0.1
tiktoken==0.6.0
wikipedia==1.4.0
pypdf==4.0.2
langchain_openai==0.0.8
langchain_experimental==0.0.53
langchainhub==0.1.14

(Some of these libs are only used in later parts)

The most relevant libs are:

https://pypi.org/project/langchain/

https://api.python.langchain.com/en/latest/openai_api_reference.html

https://api.python.langchain.com/en/latest/experimental_api_reference.html

They offer APIs to speed up the interaction with the OpenAI API.

Introduction - OpenAI API

First task: define Quantum Physics for kids

In Jupyter / Colab you can install all Python packages like this:

# installing the required libraries
!pip install -r ./requirements.txt -q

For the record:

# !pip - runs pip in a shell subprocess, which may target a different environment than the notebook kernel
# %pip - Jupyter magic that installs the packages into the environment of the running kernel

API key management

I use a file named env (not .env, because of some issues with Colab)

import os
from dotenv import load_dotenv, find_dotenv

# locate the file named "env" and load its key/value pairs into os.environ
load_dotenv(find_dotenv(filename="env"), override=True)

The file is structured in a line-separated format:

OPENAI_API_KEY=sk-J...YRZ

You may add further keys.
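For example, if you later use Pinecone (whose client is pinned in requirements.txt), the file simply grows line by line. The values below are placeholders, not real keys, and PINECONE_API_KEY is just the conventional variable name, not something this part of the tutorial requires:

OPENAI_API_KEY=sk-J...YRZ
PINECONE_API_KEY=pc-...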

Instantiate the local connector

The following code starts a local connector which simply sends requests to the OpenAI service endpoint. This costs money.

from langchain_openai import ChatOpenAI

# read the key that load_dotenv placed into the environment
OPENAI_API_KEY = os.environ.get('OPENAI_API_KEY')

llm = ChatOpenAI(model="gpt-4", temperature=0.9, max_tokens=512, api_key=OPENAI_API_KEY)
# print(llm)

Printing the llm object would reveal the API key as well, among other settings (hence the print is commented out).

  • temperature refers to the “creativity” parameter: the higher the value, the more creative (and less deterministic) the output

  • gpt-4 is the model name
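Before moving on, a quick smoke test can verify the key and connectivity. This sends one short (paid) request; llm.invoke also accepts a plain string, which LangChain wraps as a human message:

# one short request; the result is an AIMessage whose text is in .content
response = llm.invoke("Say 'pong'.")
print(response.content)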

Forward a query

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_openai import OpenAI

template = """Question: {question}

Answer: Let's keep the answer very short and so simple so that
 a child can understand it."""

prompt = PromptTemplate.from_template(template)

llm = OpenAI()

llm_chain = LLMChain(prompt=prompt, llm=llm)

# Define the question as a dictionary to match the expected input format
question_dict = {"question": "explain quantum mechanics in one sentence"}

# Invoke the chain with the question dictionary
output = llm_chain.invoke(question_dict)

print(output)

The output here is non-deterministic:

{'question': 'explain quantum mechanics in one sentence', 'text': ' Quantum mechanics is a scientific theory that describes the behavior of particles at a very small scale, such as atoms and subatomic particles.'}

  • this is correct by encyclopedic standards

  • this is not something a child could understand (warning)

    • is this even possible, with just words?
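To print only the answer text rather than the whole dictionary, read the chain's default output key 'text':

# LLMChain.invoke returns a dict; the generated answer is under 'text'
print(output['text'])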

Second task: define Quantum Physics scientifically

Here we refine the prompt a little more:

from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)

  • AIMessage – a message authored by the model (e.g. its previous answers, useful as conversation history)

  • HumanMessage – the user's question or input

  • SystemMessage – sets the context and behavior of the model (boundaries, persona, ethics, etc.)

chat = ChatOpenAI(model='gpt-4', temperature=0.5, max_tokens=1024)
messages = [
    SystemMessage(content='You are a physicist and respond only in German.'),
    HumanMessage(content='explain quantum mechanics in one sentence')
]
output = chat.invoke(messages)
print(output)
content='Quantenmechanik ist das Studium von Phänomenen auf mikroskopischer Ebene, basierend auf der Theorie, dass Energie und Materie sowohl Teilchen- als auch Welleneigenschaften aufweisen können.'

(In English: "Quantum mechanics is the study of phenomena at the microscopic level, based on the theory that energy and matter can exhibit both particle and wave properties.")

This answer is correct, both encyclopedically and linguistically.
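A minimal sketch of where AIMessage fits in: the model's previous answer can be fed back as conversation history for a follow-up turn (the follow-up question here is made up for illustration):

# continue the conversation: pass the previous answer back as history
messages = [
    SystemMessage(content='You are a physicist and respond only in German.'),
    HumanMessage(content='explain quantum mechanics in one sentence'),
    AIMessage(content=output.content),  # the model's answer from above
    HumanMessage(content='Now explain it so a child can understand it.')
]
follow_up = chat.invoke(messages)
print(follow_up.content)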

Third task: implement and document Softmax (Python)

from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

# Initialize the first ChatOpenAI model (gpt-4) with specific temperature
llm1 = ChatOpenAI(model='gpt-4', temperature=0.5)

# Define the first prompt template
prompt_template1 = PromptTemplate.from_template(
    template='You are an experienced scientist and Python programmer. Write a function that implements the concept of {concept}.'
)
# Create an LLMChain using the first model and the prompt template
chain1 = LLMChain(llm=llm1, prompt=prompt_template1)

# Initialize the second ChatOpenAI model with a higher temperature
llm2 = ChatOpenAI(model='gpt-4', temperature=1.2)

# Define the second prompt template
prompt_template2 = PromptTemplate.from_template(
    template='Given the Python function {function}, describe it as detailed as possible.'
)
# Create another LLMChain using the second model and the prompt template
chain2 = LLMChain(llm=llm2, prompt=prompt_template2)

# Combine both chains into a SimpleSequentialChain
overall_chain = SimpleSequentialChain(chains=[chain1, chain2], verbose=True)

# Invoke the overall chain with the concept "softmax"
output = overall_chain.invoke('softmax')

This will output two results from a single input:

  1. code

  2. documentation

> Entering new SimpleSequentialChain chain...
Sure, here is a Python function that implements the concept of softmax:

```python
import numpy as np

def softmax(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum(axis=0)

# Test with some data
scores = [3.0, 1.0, 0.2]
print(softmax(scores))
```

This function takes as input a list of numbers (x) and returns a list of the same length where each value is the softmax of the corresponding input value. The softmax function is often used in machine learning, to convert a vector of real numbers into a probability distribution. Each output value is between 0 and 1 (inclusive), and the sum of all output values is 1.
The given Python function named `softmax` is used to compute the softmax values for each set of scores in `x`. The 'softmax' function applies the Softmax transformation/normalization to an input which results in continuous normalized probabilities for multi-class classification problems, typically used in the last operation of a machine learning model, such as a neural network  

The steps involved in the function are as follows:

1. First we import numpy, a popular python library that supports large multidimensional arrays and many high-level mathematical functions to operate on these arrays.

2. Define the `softmax` function which expects a list of numbers, denoted `x` as an argument.

3. Inside the function, the function `np.exp(x - np.max(x))` is used. The `np.exp()` function from the NumPy library stands for exponential. Thus, for each score `x` in the list `x`, we subtract the maximum element present in `x` to prevent overflow. With `np.exp(x)`, we are applying the exponential (exp) functionality to each of the entries in the sequence of values.

4. Then, the output (a new list of exponential values) is then divided by the sum of this output list (computed with `e_x.sum(axis=0)`), ensuring that the sum of all probabilities is 1. The `axis=0` refers to the column axis. This process effectively generates probabilities from a list of numbers.

The function usage is demonstrated by defining a list of raw scores, `[3.0, 1.0, 0.2]`. We pass this list to the `softmax` function and print the result. 

Finally, this Python function converts a vector/array/list of raw scores into a probabilistic form- making it suitable for multi-class classification problems. Larger scores reflect higher probability. 

All the scores returned in the list by the `softmax` function ranges between `0` and `1`, such sums to `1`. This helps emphasizes the highest values and suppress values one of which are significantly below the maximum values.

> Finished chain.

Again, this is correct.
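To work with the result programmatically instead of relying on verbose=True, read the dictionary returned by invoke; SimpleSequentialChain stores the final result under its default key 'output':

# overall_chain.invoke returns a dict: {'input': ..., 'output': ...}
print(output['output'])  # the generated documentation text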

Fourth task: generate Python code, and debug it

from langchain import hub
from langchain.agents import AgentExecutor
from langchain_experimental.tools import PythonREPLTool

tools = [PythonREPLTool()]

from langchain.agents import create_openai_functions_agent
from langchain_openai import ChatOpenAI

instructions = """You are an agent designed to write and execute python code to answer questions.
You have access to a python REPL, which you can use to execute python code.
If you get an error, debug your code and try again.
Only use the output of your code to answer the question. 
You might know the answer without running any code, but you should still run the code to get the answer.
If it does not seem like you can write code to answer the question, just return "I don't know" as the answer.
"""
base_prompt = hub.pull("langchain-ai/openai-functions-template")
prompt = base_prompt.partial(instructions=instructions)

agent = create_openai_functions_agent(ChatOpenAI(temperature=0), tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "What is the 8th digit of Pi?"})

The result is:

> Entering new AgentExecutor chain...

Invoking: `Python_REPL` with `str(math.pi)[8]`


The 8th digit of Pi is 3.

> Finished chain.
{'input': 'What is the 8th digit of Pi?',
 'output': 'The 8th digit of Pi is 3.'}

  • This code has actually been executed in the local environment (warning)

Sandbox environments for LLM agents

Sandboxing here means that changes to the environment are not persistent. There are various definitions of the term, including some that apply only in the context of software security and runtime environments. Here it simply refers to a playground where the LLM agents can roam freely.

Keep in mind: LLMs may hallucinate / are as smart as 🦜. They need a cage.

Google Colab (throw-away environments)

https://colab.research.google.com/

Easy, for Windows:

https://sandboxie-plus.com/

Intermediate, for Docker (incl. Windows as a VM with Docker Desktop):

https://www.docker.com/blog/getting-started-with-jupyterlab-as-a-docker-extension/
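As a hands-on example, a throw-away JupyterLab container can be started like this (jupyter/base-notebook is the official Jupyter Docker Stacks base image; adjust to taste). With --rm, every change inside the container is discarded when it stops:

# disposable JupyterLab: all state vanishes when the container exits
docker run --rm -p 8888:8888 jupyter/base-notebook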

Next
