The First Role-Play Model

We are proud to present Tifa-Deepsex: built on Deepseek-R1, with greatly enhanced role-playing capabilities, novel-text generation, and Chain of Thought (CoT) reasoning.
NEW: Try Tifa-Deepsex with Ollama
Tifa-DeepsexV2-7b-Cot-0301-F16
huanlin/Tifa-DeepsexV2-7b-Cot-0301-F16
Tifa-DeepsexV2-7b-Cot-0301-Q4_KM
huanlin/Tifa-DeepsexV2-7b-Cot-0301-Q4_KM
Tifa-DeepsexV2-7b-Cot-0301-Q8
huanlin/Tifa-DeepsexV2-7b-Cot-0301-Q8
Tifa-DeepsexV2-7b-0218-Q4_KM
huanlin/Tifa-DeepsexV2-7b-0218-Q4_KM
Tifa-DeepsexV2-7b-F16
huanlin/Tifa-DeepsexV2-7b-F16
Tifa-DeepsexV2-7b-NoCot-0222-Q4_KM
huanlin/Tifa-DeepsexV2-7b-NoCot-0222-Q4_KM
Tifa-DeepsexV2-7b-Q4_KM
huanlin/Tifa-DeepsexV2-7b-Q4_KM
Tifa-DeepsexV2-7b-0218-F16
huanlin/Tifa-DeepsexV2-7b-0218-F16
Tifa-DeepsexV2-7b-Q8
huanlin/Tifa-DeepsexV2-7b-Q8
Tifa-Deepsex-14b-CoT-Q8
huanlin/Tifa-Deepsex-14b-CoT-Q8
Tifa-Deepsex-14b-CoT-Q4_K_M
huanlin/Tifa-Deepsex-14b-CoT-Q4_K_M
Tifa-Deepsex-14b-CoT-Crazy-Q8
huanlin/Tifa-Deepsex-14b-CoT-Crazy-Q8
Tifa-Deepsex-14b-CoT-Crazy-IQ4_NL
huanlin/Tifa-Deepsex-14b-CoT-Crazy-IQ4_NL
Tifa-Deepsex-14b-CoT-Chat-Q8
huanlin/Tifa-Deepsex-14b-CoT-Chat-Q8
...
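Any of the tags above can be pulled and run locally with the Ollama CLI. A minimal example, using the Q4_K_M build as an illustrative choice (it trades some precision for a much smaller download than F16):

```shell
# Pull one of the quantized builds listed above
ollama pull huanlin/Tifa-DeepsexV2-7b-Cot-0301-Q4_KM

# Start an interactive chat session with it
ollama run huanlin/Tifa-DeepsexV2-7b-Cot-0301-Q4_KM
```

Any other tag from the list can be substituted; larger quantizations (Q8, F16) need proportionally more memory.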
Key Features
🤖 Enhanced Role-Playing
Improved character interactions and story generation
Trained on 0.4T tokens of novel text and 100K high-quality role-playing samples for immersive character experiences.
🧠 Chain of Thought
Advanced reasoning capabilities
Implements sophisticated thought processes for complex problem-solving and storytelling coherence.
📜 128K Context
Extended context window
Supports ultra-long context for maintaining coherence in extended conversations and stories.
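Note that Ollama's default context window is much shorter than 128K; to actually use the extended window, the limit can be raised in a custom Modelfile. A sketch (the base tag is one of the builds listed above, and the `num_ctx` value you can afford depends on available memory):

```
FROM huanlin/Tifa-DeepsexV2-7b-Cot-0301-Q4_KM

# Raise the context window toward the model's 128K limit (128 * 1024 = 131072 tokens)
PARAMETER num_ctx 131072
```

Build and run it with `ollama create tifa-128k -f Modelfile`, then `ollama run tifa-128k`.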
Model Versions
Tifa-Deepsex-14b-CoT
Verification model for testing RL reward algorithms
Initial version focused on flexible outputs and research purposes. Includes:
- Base testing implementation
- Uncontrolled but flexible responses
- Research-oriented features
This series includes many models; see the model repository for the full list.
Try Now: Tifa-Deepsex-Cot-14B (HuggingFace Space)
Technical Specifications
- Base Architecture: Deepseek-R1-14B
- Max Context: 128K tokens
- Training Data: 0.4T tokens of novels + 100K SFT samples
- Hardware: 8×H20 GPU cluster
Acknowledgments
- Shanghai Left-North Technology - Algorithms & Computing Power
- Deepseek Team - GRPO Algorithm Sharing
- Qwen Team - Excellent Open Source Foundation
- Fudan University, Shanghai
- PRIME Team - Optimization Strategy