Create Auto-GPT with LangChain

Summarize the following. Use a bulleted list for all key points. Do not include introductory material. If code is explained, extract the code from the speech-to-text into a code block. If you can identify any models, tokenizers, embeddings, or other files mentioned, create a list in the footer.
Title: “Auto-GPT with LangChain 🔥 | Create Your Own Personal AI Assistant 🦾”
Transcript: “Hello guys, welcome back. Let's go through another great use case of LangChain. I hope you already know what Auto-GPT is; if not, it is an experimental open-source attempt to make GPT-4 fully autonomous. Within around one month it gained 119,000 stars on GitHub, and many applications are being created using Auto-GPT. There are many ways to run Auto-GPT locally — I have already created a video, or you can follow the documentation provided in the GitHub repo, which explains the step-by-step process to get started. But in this video I will show you how to create Auto-GPT using LangChain. Let's get started. The code is in my GitHub repository and the link is in the description box.

First things first, I have provided the links here; as I said before, this is based on the Auto-GPT GitHub repo, and we are going to implement it with LangChain. You need to install the packages — here I have just passed the cell where I'm installing them. I am installing langchain, google-search-results (because we are going to use Google Search to find results), faiss-cpu (because we are going to use FAISS as the vector store), and of course openai, because we are using the OpenAI chat models.

Now let's set up the tools. If you haven't already, you need to create accounts for SerpAPI as well as the OpenAI platform; after you get the API keys, just replace them in these strings and you are good to go. Next we import the necessary classes from LangChain: I am importing SerpAPIWrapper from the utilities, Tool from the agents, and from tools.file_management.write and .read I am importing WriteFileTool and ReadFileTool — we need the agent to read and write files for us automatically. Then I create the SerpAPIWrapper and the tools list: the first tool's name is "search", its function is search.run (because we need to run the Google search), and you can provide a description here. After that we also pass the WriteFileTool and the ReadFileTool. I can just run this cell for now. By the way, if you want documentation about a class in notebooks, it's really simple in Google Colab: just hover over the class and you will see "open in tab" or "view source". If you click "view source", it opens on the right side of the screen so you can go through what is implemented in the code itself and better understand what the code is doing behind the scenes.

Okay, now we have set up the tools. Next we are going to set up the memory. The memory is used for the agent's intermediate steps, because we are letting the agent do the job for us. We are going to use a wrapper around the FAISS vector database; if you want to know what FAISS is, I have provided the link here, where I've explained what embeddings, indexes, and vector stores are. You need that understanding when you create the memory, because machines just understand numbers: if we pass a string to the model, we first need to convert it into embeddings and store them somewhere. A vector store is a specialized data storage system optimized for storing and retrieving embeddings, and we also need an index — in short, an index is where you load, manage, and query the data. Why do we need all of this? Because when the language model needs to look up the embeddings for a word or phrase, it uses the index to find the corresponding embeddings in the vector store. That is how it actually works.

For that we first import the necessary classes again: I'm importing InMemoryDocstore from langchain.docstore, FAISS from LangChain's vector stores, and OpenAIEmbeddings. Let's run this cell. Now we define our embedding model — as I said before, we are using OpenAIEmbeddings, though you could also use Hugging Face or other embeddings — and we initialize the vector store as empty. We import faiss, and the embedding size is 1536, because OpenAI embeddings have 1536 dimensions; it cannot hold more than that. Here is the index: as I said before, we use faiss.IndexFlatL2 and pass the embedding size into it. After that we create the vector store, passing everything in: you see we pass embeddings_model.embed_query, the index, and the InMemoryDocstore.

Now that we have all the necessary pieces together, we can set up the model and Auto-GPT. By the way, it's not that difficult, but you need to know what you are doing step by step — when you implement something or create an application, you need to know what comes after each and every step. As I said before, we are using the ChatOpenAI model: from langchain_experimental we import the AutoGPT class, and from langchain.chat_models we import ChatOpenAI. There are many models, but we just want to use ChatOpenAI. Now we need to create the agent — this is the main part. We call AutoGPT.from_llm_and_tools, passing the AI name (you can pass anything), the AI role (again, anything you like), and the tools we created above. Then we pass the LLM, meaning the large language model — in this case ChatOpenAI with temperature zero for now — and for the memory we use the vector store as a retriever, vectorstore.as_retriever(). That's it. Then we set agent.chain.verbose = True, because we want to see what the agent, Auto-GPT, does behind the scenes. Now let's run this cell.

I have already created one example before, but for now I will run the example for you. We created the agent, and now we just have to run it: agent.run(["Recommend 5 movies to watch which is similar to Avengers"]). If I run this, it shows everything for us: it says "Entering new LLMChain", "Prompt after formatting", and so on. Let's go through it. The system prompt says "You are Tom, Assistant" (as we provided the name); "your decisions must always be made independently without seeking user assistance; play to your strengths as an LLM" — these are the main things being passed here. Then there are the goals: the goal is "recommend 5 movies to watch which is similar to Avengers". Then the constraints: there is an around 4000-word limit for short-term memory — "your short-term memory is short, so immediately save important information to files" — because we have the write-file and read-file tools, it will write into files and read from files. By the way, it has already completed, so let's go step by step. The constraints are here, and then the commands: search ("useful for when you need..."), which we just passed before; write_file (write file to disk); read_file (read file from disk); and the finish command ("use this to signal that you have finished all your objectives"). These are the constraints and the commands. Then the resources: internet access for searches and information gathering, long-term memory management, GPT-3.5-powered agents for delegation of simple tasks, and file output. The performance evaluation section lists all these things too, and then it says you should only respond in JSON format as described below — you can see the thoughts here: thought, reasoning, plan, criticism, thought summary to say to the user, and so on (we don't have the speak feature implemented here). You can see how it is formatted. Here is the system message: "Command write_file returned...", "Human: determine which next command to use", and so on. It went through some of this, and it says "Finished chain". If we scroll down, it says "Entering new LLMChain", "Prompt after formatting", the same system prompt — "You are Tom, Assistant, your decisions must always be..." — the same goal to recommend 5 movies; if we scroll a little further down, it says "Finished chain", then "Command: finish. Response: I have completed all my objectives."

If we now go to this folder, there should be a file created — you see here "similar movies.txt" (this weather report file is from a run I did before; it went much faster than I anticipated). Let's go through the similar-movies file: I asked for five movies, and it gives The Dark Knight, The Matrix, Inception, Avengers: Age of Ultron, and Captain America: The Winter Soldier. I think it did quite well — let me scroll up to check my question — yes, I asked for movies similar to Avengers, and these are similar to Avengers. That is how quickly you can create your own Auto-GPT. In the earlier example, I simply asked for the weather report for Helsinki today, and the result was a report: if I double-click it, I can show you that it said the current weather conditions in Helsinki are mostly sunny skies during the morning giving way to mostly cloudy skies in the afternoon, a high near 50 °F, with winds, and humidity at 53%. The reason I'm showing you two examples is that you can configure your Auto-GPT for your own tasks: provide any task and it will do it for you. I just showed you simple examples, but you get the idea of how simple it is to create Auto-GPT by yourself with LangChain. That's all for this video — I just wanted to show you that with the help of LangChain we can create different use cases with a few lines of Python code. I hope you learned something; if yes, please give a thumbs up for the video. If you haven't subscribed, please do so — it helps me grow my channel and reach a larger audience. Thank you for watching, and see you in the next video.”

– Auto-GPT is an experimental open-source attempt to make GPT-4 fully autonomous.
– It gained roughly 119,000 stars on GitHub within about a month, and many applications are being built on top of it.
– The video tutorial focuses on creating Auto-GPT using LangChain.
– The necessary packages are installed: langchain, google-search-results, faiss-cpu, and openai.
– API keys for SerpAPI (Google search results) and the OpenAI platform are obtained and pasted into the notebook.
– Classes are imported from LangChain: SerpAPIWrapper, Tool, WriteFileTool, and ReadFileTool.
– A SerpAPIWrapper instance and a tools list are created so the agent can run Google searches and read/write files.
– Memory setup is explained, using a wrapper around the FAISS vector database for storing and retrieving embeddings; the index is what the model uses to look up embeddings in the vector store.
– Classes for memory setup are imported: InMemoryDocstore, FAISS from LangChain's vector stores, and OpenAIEmbeddings.
– The embedding model is defined with OpenAIEmbeddings, and an empty FAISS vector store is initialized with a faiss.IndexFlatL2 index and an embedding size of 1536 (the dimensionality of OpenAI embeddings).
– The model and Auto-GPT are set up: AutoGPT is imported from langchain_experimental and ChatOpenAI from langchain.chat_models.
– The agent is created with AutoGPT.from_llm_and_tools, passing the AI name, AI role, tools, the ChatOpenAI LLM (temperature 0), and the vector store as a retriever for memory.
– Verbose mode is enabled (agent.chain.verbose = True) so intermediate steps are visible, and the example is run by calling agent.run.
– The example involves asking the assistant to recommend five movies similar to Avengers.
– The process of running the example is explained, including constraints, commands, and resources.
– The result is written to a file called “similar movies.txt” with the recommended movies.
– Another example of asking the weather report for Helsinki is mentioned.
– The output of the weather report is shown, indicating mostly sunny skies in the morning and mostly cloudy skies in the afternoon.
– The video concludes by emphasizing the ease of creating Auto-GPT with LangChain and encourages viewers to subscribe and give a thumbs up.

Models, Tokenizers, and Embeddings:
– ChatOpenAI chat model (OpenAI, temperature 0) via LangChain
– OpenAIEmbeddings (1536-dimensional embeddings)
– FAISS vector store (faiss.IndexFlatL2 index) with InMemoryDocstore
– Output files: “similar movies.txt” and a weather report file