In this video, we'll explore LlamaIndex (previously GPT Index) and how we can use it with the Pinecone vector database for semantic search and retrieval augmentation of LLMs like gpt-3.5-turbo or gpt-4. We'll cover the essentials: LlamaIndex Documents, Nodes, the vector store object, service contexts, and storage contexts.

Code Notebook: https://github.com/pinecone-io/examples/blob/master/learn/generation/llama-index/llama-index-intro.ipynb

Subscribe for Latest Articles and Videos: https://www.pinecone.io/newsletter-signup/

AI Consulting: https://aurelio.ai
Discord: https://discord.gg/c5QtDB9RAP
Twitter: https://twitter.com/jamescalam
LinkedIn: https://www.linkedin.com/in/jamescalam/

00:00 Getting Started with Llama Index
01:13 Llama Index Features
02:15 Llama Index Code Intro
03:55 Llama Index Document Objects
05:45 Llama Index Nodes
07:23 Indexing with Pinecone
09:22 Vector Store in Llama Index
15:36 Making Queries with Llama Index

#artificialintelligence #nlp #openai #generativeai
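As a quick reference, below is a minimal sketch of the workflow the video walks through: wrapping a Pinecone index as a LlamaIndex vector store, building an index from Document objects, and querying it. It assumes the legacy llama_index 0.x API (with StorageContext) and pinecone-client 2.x; the index name, environment variables, and document text are placeholders, so see the linked notebook for the exact code used in the video.

# Minimal sketch (assumes legacy llama_index 0.x and pinecone-client 2.x; names/keys are placeholders)
import os
import pinecone
from llama_index import Document, GPTVectorStoreIndex, StorageContext
from llama_index.vector_stores import PineconeVectorStore

# Connect to Pinecone and create an index sized for OpenAI ada-002 embeddings (1536 dimensions)
pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENVIRONMENT"],
)
if "llama-index-intro" not in pinecone.list_indexes():
    pinecone.create_index("llama-index-intro", dimension=1536, metric="cosine")
pinecone_index = pinecone.Index("llama-index-intro")

# Wrap the Pinecone index as a LlamaIndex vector store and register it in a storage context
vector_store = PineconeVectorStore(pinecone_index=pinecone_index)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Build Document objects; LlamaIndex chunks them into Nodes when constructing the index
documents = [Document(text="LlamaIndex, previously GPT Index, is a data framework for LLMs.")]
index = GPTVectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Query the index: retrieved nodes are passed to the LLM to augment its answer
query_engine = index.as_query_engine()
response = query_engine.query("What was LlamaIndex previously called?")
print(response)

The key design point is that the storage context decouples where vectors live (Pinecone) from how the index is built and queried, so the same Document/Node pipeline works with other vector stores.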
The Generative AI and Large Language Models (LLMs) course covers everything you need to know about:
- Generative AI
- Large Language Models (LLMs)
- OpenAI, Cohere, Hugging Face
- Managed vs. Open Source
- LLM Libraries like LangChain and GPT Index
- Long-term memory and retrieval augmentation
And more to come...