V-4843:【极客】AI 大模型微调训练营
- file:01 20240121评论区记录.xlsx
- file:01 ChatGPT大模型训练技术RLHF.mp4
- file:02 14-混合专家模型(MoEs)技术揭秘.pdf
- file:01 15-大模型分布式训练框架Microsoft DeepSpeed.pdf
- file:03 Meta AI 大模型家族 LLaMA.mp4
- file:01 实战QLoRA微调ChatGLM3-6B.mp4
- file:01 GLM论文.zip
- file:2-大语言模型技术发展与演进.pdf
- file:01 AI大模型四阶技术总览 .mp4
- file:第一周作业参考答案.pdf
- file:4-大模型微调技术揭秘-LoRA.pdf
- file:Fine-tuning论文.zip
- file:UniPELT A Unified Framework for Parameter-Efficient Language Model Tuning.pdf
- file:8-大模型高效微调工具HF PEFT.pdf
- file:7-实战Transformers模型量化.pdf
- file:01 Quantization论文.zip
- file:01 实战基于LangChain和ChatGLM私有化部署聊天机器人.mp4
- file:02 12-实战私有数据微调ChatGLM3.pdf
- file:02 快速入门 LangChain 大模型应用开发框架(上).mp4
- file:01 大模型开发工具库Hugging Face Transformers .mp4
- file:开营直播:大语言模型微调的前沿技术与应用.pdf
- file:Mixtral AI.pdf
- file:Switch Transformers Scaling to Trillion Parameter Models with Simple and Efficient Sparsity.pdf
- file:GLaM Efficient Scaling of Language Models with Mixture-of-Experts.pdf
- file:Mixture-of-Experts with Expert Choice Routing.pdf
- file:Adaptive-mixtures-of-local-experts.pdf
- file:Learning Factored Representations in a Deep Mixture of Experts.pdf
- file:ST-MOE DESIGNING STABLE AND TRANSFERABLE SPARSE EXPERT MODELS.pdf
- file:Proximal Policy Optimization Algorithms.pdf
- file:Learning to summarize from human feedback.pdf
- file:Training language models to follow instructions with human feedback.pdf
- file:LLaMA Open and Efficient Foundation Language Models.pdf
- file:A Survey of Large Language Models.pdf
- file:Llama 2 Open Foundation and Fine-Tuned Chat Models.pdf
- file:ZeRO-Offload Democratizing Billion-Scale Model Training.pdf
- file:ZeRO-Infinity Breaking the GPU Memory Wall for Extreme Scale Deep Learning.pdf
- file:ZeRO Memory Optimizations Toward Training Trillion Parameter Models.pdf
- file:GLM.pdf
- file:GLM-130B v1.pdf
- folder:V-4843:【极客】AI 大模型微调训练营
- folder:08 第七周
- folder:02 MoEs论文
- folder:RLHF
Share time | 2024-12-26
---|---
Entry time | 2025-04-27
Status check | Valid
Resource type | QUARK
Shared by | 开心*薯片