【贪心科技】大模型微调实战营-应用篇 - 带源码课件
- file:资料.zip
- file:01 Alpaca.mp4
- file:05 微调Mistral 7B.mp4
- file:02 Self-Attention的分块计算.mp4
- file:03 分块模式中计算O.mp4
- file:04 Mixture of Expert Model.mp4
- file:02 Distributed Computing.mp4
- file:03 ZeRO-123 and FSDP.mp4
- file:02 Quantization01.mp4
- file:01 Prefix Tuning.mp4
- file:04 Quantization Methos for LLM.mp4
- file:02 Transformer Part2.mp4
- file:03 Encoder-based and Decoder Based LLMs.mp4
- file:04 Advanced Topics.mp4
- file:03 大模型是如何炼成的.mp4
- file:01 Optimal Policy.mp4
- file:02 Intro to Monte Carlo.mp4
- file:02 Lora微调-Lora算法.mp4
- file:04 The goal of Agent.mp4
- file:03 Multi-armed Bandit.mp4
- file:02 llama介绍&运行&量化&部署&微调02.mp4
- folder:【贪心科技】大模型微调实战营-应用篇 - 带源码课件
- folder:04 第四周
- folder:01 第四节 2024年3月3日
- folder:01 Alpaca、AdaLoRA、QLoRA
- folder:01 Flash Attention cont、微调Mistral 7B
- folder:01 Distributed Computing、Flash Attention
- folder:01 Prefix Tuning、Quantization
- folder:01 Transformer、Encoder、Advanced
- folder:01 开营+大模型介绍、Transformer
- folder:01 Optimal Policy、Intro to Monte Carlo
- folder:01 大模型微调概览 Lora微调
Share date | 2024-09-30
---|---
Indexed date | 2024-09-30
Status check | Valid
Resource type | QUARK
Shared by | 心旷*怡的青蛙