Awesome AIGC Tutorials houses a curated collection of tutorials and resources spanning Large Language Models, AI Painting, and related fields. Discover in-depth insights and knowledge tailored to both beginners and advanced AI enthusiasts.
"AI for Everyone" is a beginner's guide to understanding AI's practical applications, its limitations, and its societal impact, ideal for business professionals and leaders alike.
Wharton Interactive's crash course delves into the mechanics and impacts of LLMs, spotlighting models like OpenAI's ChatGPT (GPT-4), Microsoft's Bing in Creative Mode, and Google's Bard.
This 12-week Microsoft curriculum dives deep into AI methodologies, spanning topics from symbolic AI to neural networks, while highlighting the TensorFlow and PyTorch frameworks, yet it omits business applications, classic machine learning, and certain cloud-specific topics.
Co-taught by OpenAI and DeepLearning.AI, this course guides learners in leveraging Large Language Models for tasks like summarizing and text transformation, with hands-on experiences in a Jupyter notebook environment.
Led by experts from OpenAI and DeepLearning.AI, this course teaches automating workflows using language models, creating prompt chains, integrating Python, and designing chatbots, all through hands-on Jupyter notebook exercises, requiring only basic Python knowledge.
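As a flavor of the kind of API call these two courses build their exercises around, here is a minimal sketch using the official `openai` Python client (v1 interface); the model name, review text, and system instruction are illustrative placeholders, not material from the courses themselves.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

review = "The headphones arrived quickly, but the left ear cup rattles at high volume."

# Ask the model for a controlled, structured summary of the review.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model works here
    messages=[
        {"role": "system", "content": "You summarize product reviews in one neutral sentence."},
        {"role": "user", "content": f"Summarize this review:\n{review}"},
    ],
    temperature=0,  # keep the output close to deterministic for repeatable summaries
)

print(response.choices[0].message.content)
```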
Guided by the creator of LangChain and Andrew Ng, this course dives into advanced LLM techniques like chaining operations and using models as reasoning agents, empowering learners to craft robust applications quickly with foundational Python knowledge.
Delve into Retrieval Augmented Generation and chatbot creation based on document content with LangChain, covering data loading, splitting, embeddings, advanced retrieval techniques, and interactive chatbot building, designed for Python-savvy developers keen on harnessing Large Language Models.
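A rough sketch of the Retrieval Augmented Generation flow the course walks through (load, split, embed, retrieve, answer); LangChain's module layout shifts between releases, so the imports below follow the classic layout and may need adjusting, and the PDF path is hypothetical.

```python
# Retrieval-augmented QA over a document; adjust imports to your LangChain version.
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

docs = PyPDFLoader("lecture_notes.pdf").load()  # hypothetical document
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

vectordb = Chroma.from_documents(chunks, OpenAIEmbeddings())  # embed and index the chunks
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    retriever=vectordb.as_retriever(search_kwargs={"k": 3}),  # fetch the top-3 relevant chunks
)

print(qa.run("What topics does the document cover?"))
```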
Unlock the potential of Large Language Models like ChatGPT by mastering prompt engineering, transitioning from basic to sophisticated prompts, enabling diverse applications ranging from writing to simulation, suitable for anyone with basic computer skills.
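To make the basic-to-sophisticated progression concrete, here is a hypothetical pair of prompts for the same task; the wording is illustrative and not taken from the course.

```python
# A naive prompt: the model must guess the audience, length, and format.
basic_prompt = "Write about electric cars."

# A more engineered prompt: role, audience, constraints, and output format are explicit.
refined_prompt = """You are an automotive journalist writing for first-time buyers.
Write a 120-word overview of electric cars that:
1. Explains how charging differs from refueling.
2. Mentions one common misconception.
3. Ends with a single practical buying tip.
Return the answer as three short paragraphs."""
```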
This guide introduces Prompt Engineering, a discipline that optimizes interactions with Large Language Models, offering extensive resources, research, and tools.
Dive into a beginner-friendly guide on Generative AI and Prompt Engineering, offering insights from industry giants, and explore how these tools revolutionize content creation and the future of work.
Explore the transformative world of LangChain, mastering core components, crafting effective prompts, and harnessing advanced AI agents, conversational memories, and custom tools for cutting-edge applications.
Delve deep into prompt engineering, LLM operations, user experience design for language interfaces, augmented language model techniques, foundational LLM insights, hands-on projects, and the future of LLMs, complemented by expert talks from industry leaders on training and agent design.
Learn the techniques of finetuning large language models (LLMs) with Sharon Zhou, gaining expertise in data preparation, training, and updating neural net weights for improved results tailored to your data.
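As a rough illustration of the data-preparation and training loop such a course covers, here is a generic Hugging Face `Trainer` sketch; the model, dataset file, and hyperparameters are placeholders, and this is not necessarily the specific stack used in the course.

```python
# Generic causal-LM finetuning sketch with Hugging Face transformers.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "distilgpt2"  # small model so the sketch runs on modest hardware
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

raw = load_dataset("text", data_files={"train": "my_corpus.txt"})  # hypothetical training file
tokenized = raw.map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM objective
)
trainer.train()  # updates the model's weights on your data
```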
CS 324 delves into foundation models like GPT-3 and DALL-E, covering their principles, systems, ethics, and application, and culminates in a hands-on research project or application design.
This course offers an in-depth exploration of self-supervised learning techniques for NLP, training students to design and implement neural network models using PyTorch, with a focus on various language model architectures.
This course provides a comprehensive insight into Deep Learning for NLP using PyTorch, emphasizing end-to-end neural models, eliminating the need for task-specific feature engineering, and equipping students with the skills to craft their own neural network solutions.
Authored by leading experts in the field, this authoritative text provides an in-depth exploration of the algorithms and mathematical models for modern natural language processing and speech recognition, and is continually updated to reflect the rapid advancements in the NLP domain.
An advanced exploration into the transformative realm of LLMs, discussing state-of-the-art models, their profound capabilities, and associated challenges, with an emphasis on in-depth research, ethical considerations, and hands-on project experience, tailored for seasoned students versed in machine learning and deep NLP frameworks.
Master generative AI in 'How Diffusion Models Work', an intermediate course by Sharon Zhou, where you'll craft diffusion models from scratch, enriched with hands-on coding and labs, ideal for those proficient in Python, TensorFlow, or PyTorch.
The Hugging Face course offers an in-depth look into diffusion models, guiding participants through media generation, hands-on training, and customization using the Diffusers library, with a foundational understanding of Python and Deep Learning essential for the best experience.
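A minimal sketch of the Diffusers workflow the Hugging Face course builds toward, assuming a public Stable Diffusion checkpoint and a CUDA GPU; the model ID, prompt, and parameters are illustrative choices rather than course material.

```python
# Text-to-image with the Diffusers library; a GPU is assumed for reasonable speed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any compatible checkpoint works
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a watercolor painting of a lighthouse at dusk",
    num_inference_steps=30,  # fewer denoising steps means faster but rougher output
    guidance_scale=7.5,      # how strongly the image should follow the prompt
).images[0]
image.save("lighthouse.png")
```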
This course offers an in-depth exploration of Stable Diffusion algorithms, covering advanced deep learning techniques and hands-on projects using PyTorch, empowering students with expertise in cutting-edge diffusion models.
The Hugging Face Audio course teaches how to use transformers for various audio tasks, from speech recognition to generating speech from text, combining theory with hands-on exercises for learners familiar with deep learning.
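A quick sketch of the `transformers` pipeline API this audio course leans on, using a Whisper checkpoint and a hypothetical audio file as illustrative choices.

```python
# Automatic speech recognition with a pretrained transformer checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
result = asr("meeting_recording.wav")  # hypothetical audio file path
print(result["text"])                  # the transcribed speech
```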
An immersive course on spoken language technology, covering dialog systems, deep learning in speech recognition and synthesis, with hands-on projects using modern tools like PyTorch, Alexa Skills Kit, and SpeechBrain, culminating in student-driven research or system design projects.
This course offers an in-depth look at Multimodal Machine Learning, drawing insights from the latest edition of a survey paper and CMU's academic teachings, addressing its unique challenges and future directions.
This course delves into Multimodal Machine Learning (MMML), covering its mathematical foundations, state-of-the-art probabilistic models, and key challenges, while highlighting recent applications and techniques such as multimodal transformers and neuro-symbolic models.
This course explores Multimodal Machine Learning (MMML), covering technical challenges and recent achievements. It emphasizes critical thinking and future research trends, with weekly updates, discussion probes, and research highlights on the course website.
Discover the intricacies of Neural Networks in this highly popular YouTube playlist, seamlessly blending informative graphics with expert teaching, guiding countless students from the basics through to advanced image classification with Convolutional Neural Networks.
3Blue1Brown unveils the magic of neural networks through vivid animations and clear explanations, diving deep into hand-written digit recognition, the nuances of gradient descent, and the intricate calculus behind backpropagation.
Andrej Karpathy's course guides students from foundational backpropagation to advanced neural networks like GPT, emphasizing language models as a versatile gateway to mastering deep learning, with prerequisites in Python programming and basic math.
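In the spirit of the course's from-scratch exercises, here is a tiny hand-rolled backpropagation sketch for a single neuron; it mirrors the idea, not the course's exact code, and all values are made up for illustration.

```python
# Manual forward and backward pass for one neuron: y = tanh(w*x + b).
import math

x, w, b = 2.0, -0.5, 0.3   # input and parameters
target = 1.0

# Forward pass
z = w * x + b
y = math.tanh(z)
loss = (y - target) ** 2

# Backward pass (chain rule, by hand)
dloss_dy = 2 * (y - target)
dy_dz = 1 - y ** 2          # derivative of tanh
dloss_dw = dloss_dy * dy_dz * x
dloss_db = dloss_dy * dy_dz * 1.0

# One gradient-descent step
lr = 0.1
w -= lr * dloss_dw
b -= lr * dloss_db
print(f"loss={loss:.4f}, updated w={w:.4f}, b={b:.4f}")
```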
Practical Deep Learning for Coders 2022 is a free course offering hands-on experience in building, training, and deploying deep learning models across various domains using tools like PyTorch and fastai, suitable for those with coding knowledge and without the need for advanced math.
Andrew Ng's Deep Learning Specialization is a top-rated, self-paced program on Coursera with over 1 million learners, offering clear modules and practical techniques in AI, supported by a vast community and breaking down the latest in machine learning into understandable content.
MIT's intensive bootcamp on deep learning fundamentals, covering applications from computer vision to biology, with hands-on TensorFlow practice and a culminating project competition. Basic calculus and linear algebra knowledge required; Python experience beneficial.
Explore the transformative power of transformers in deep learning across diverse domains, from NLP to biology, in a seminar featuring expert lectures, breakthrough discussions, and insights from leading researchers, aiming to foster understanding and cross-collaborative innovation.
DeepMind presents a 12-lecture series on Deep Learning, diving from foundational topics to advanced techniques, encompassing areas from object recognition to responsible AI innovation, all delivered by leading research experts.
DeepMind and UCL present a comprehensive 13-lecture series on modern reinforcement learning, from foundational concepts to advanced deep RL techniques, led by expert researchers Hado van Hasselt, Diana Borsa, and Matteo Hessel.
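To make the foundational concepts concrete, here is a minimal tabular Q-learning sketch on a toy chain environment; the environment, reward, and hyperparameters are invented for illustration and are not drawn from the lectures.

```python
# Tabular Q-learning on a toy 5-state chain: move right to reach the goal state.
import random

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

def step(s, a):
    s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward, s_next == n_states - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: Q[s][i])
        s_next, r, done = step(s, a)
        # Q-learning update: bootstrap from the best action in the next state.
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print([[round(q, 2) for q in row] for row in Q])
```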
Delve into the symbiotic relationship between cutting-edge AI applications and the systems supporting them, exploring advancements in hardware, software, and AI-driven optimization techniques, through lectures, discussions, and collaborative hands-on projects.
Explore the foundations of deep learning systems by constructing a complete library, understanding every layer from model design to efficient algorithms, utilizing Python and C/C++.
Master the intricacies of designing robust, scalable, and deployable machine learning systems, focusing on stakeholders, evolving requirements, and holistic system design, while addressing critical issues like privacy, fairness, and security.
Dive into the architecture of modern ML systems, unraveling the journey from high-level model design to low-level kernel execution on heterogeneous hardware, while uncovering the principles and challenges of next-gen ML applications and platforms.
Explore the synergy between systems and machine learning by dissecting recent research on efficient ML hardware/software and applying ML to system design, culminating in hands-on projects and deep discussions for graduate students.
WaytoAGI.com is a comprehensive Chinese-language resource hub for AIGC, guiding users along an optimized learning path to understand and harness the power of AI.