AI-Powered Talking Dragon for Unreal Engine (Voice, Lip Sync, AI Integration)

We are seeking an experienced AI & Unreal Engine developer to enhance our existing dragon character with real-time AI-powered speech, lip sync, and conversational abilities. The dragon will have 4 distinct voices, all speaking Romanian, and will interact dynamically with users. User-specific data will be managed through our Strapi backend, while voice interactions should be handled by a low-latency AI-powered speech solution.

We are open to any AI solution that ensures fast response times and natural speech-to-speech interactions, including but not limited to OpenAI Realtime API, Rime AI, Deepgram, Azure Speech, or custom local implementations.
Requirements & Deliverables:
1. AI Voice (TTS) – 4 Unique Voices for the Dragon in Romanian
4 distinct AI-generated voices for the same dragon character.
We will provide necessary voice samples for cloning.
Voices should be child-like, playful, and dragon-themed.
Use a TTS engine that supports low-latency, natural speech synthesis. Solutions may include:
OpenAI Realtime API (https://openai.com/index/introducing-the-realtime-api)
ElevenLabs, Azure Custom Neural Voice, Deepgram, Rime AI, or Coqui TTS.
Custom-trained local models, provided latency remains low (a minimal request sketch follows this list).
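For illustration, the sketch below shows the rough shape of one TTS request per voice, written in TypeScript against an ElevenLabs-style REST endpoint; the endpoint path, model_id, and voice IDs are assumptions to verify against the chosen provider's current documentation, and the other providers listed above would follow a similar request/response pattern.

```ts
// Sketch of a single TTS request against an ElevenLabs-style endpoint.
// The endpoint path, model_id, and voice IDs are assumptions / placeholders.
const VOICE_IDS = {
  voice1: "VOICE_ID_1", // placeholder IDs for the four cloned dragon voices
  voice2: "VOICE_ID_2",
  voice3: "VOICE_ID_3",
  voice4: "VOICE_ID_4",
} as const;

async function synthesizeSpeech(
  text: string,                   // Romanian text produced by the LLM
  voice: keyof typeof VOICE_IDS
): Promise<Buffer> {
  const res = await fetch(
    `https://api.elevenlabs.io/v1/text-to-speech/${VOICE_IDS[voice]}/stream`,
    {
      method: "POST",
      headers: {
        "xi-api-key": process.env.ELEVENLABS_API_KEY ?? "",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        text,
        model_id: "eleven_multilingual_v2", // multilingual model (covers Romanian)
      }),
    }
  );
  if (!res.ok) throw new Error(`TTS request failed: ${res.status}`);
  return Buffer.from(await res.arrayBuffer()); // raw audio to forward to Unreal
}
```

The returned audio would then be forwarded to Unreal Engine for playback and lip sync.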

2. AI-Powered Real-Time Responses
Integrate an AI-driven conversational model (e.g., GPT-4, Claude, LLaMA, or another LLM) to generate dynamic responses.
Ensure low latency and optimize for streaming support where applicable.
Implement WebRTC, WebSocket, or another real-time communication method that keeps AI responses fast (a relay sketch follows this list).
The AI model should be able to fetch and utilize user-specific data (e.g., the user’s dragons) from the Strapi backend.
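As a rough sketch of the real-time path, the snippet below shows a Node.js WebSocket relay (using the ws package) that streams partial LLM tokens to the Unreal client as they arrive; the port, the message shapes, and the placeholder generateReplyStream() are assumptions, and a real implementation would call whichever LLM is selected.

```ts
// Sketch of a Node.js relay that streams LLM reply tokens to the Unreal client
// over WebSocket. Port, message shapes, and generateReplyStream() are assumptions.
import { WebSocketServer, WebSocket } from "ws";

// Placeholder for the chosen LLM (GPT-4, Claude, LLaMA, ...): here it just echoes
// a canned Romanian reply word by word to show the streaming shape.
async function* generateReplyStream(
  userId: string,
  transcript: string
): AsyncIterable<string> {
  const reply = `Salut! Ai spus: ${transcript}`; // "Hi! You said: ..."
  for (const word of reply.split(" ")) yield word + " ";
}

const wss = new WebSocketServer({ port: 8081 });

wss.on("connection", (socket: WebSocket) => {
  socket.on("message", async (raw) => {
    // Unreal sends the recognized user utterance plus the user's id.
    const { userId, transcript } = JSON.parse(raw.toString());

    // Stream partial tokens back immediately so TTS can start speaking
    // before the full reply is finished.
    for await (const token of generateReplyStream(userId, transcript)) {
      socket.send(JSON.stringify({ type: "token", text: token }));
    }
    socket.send(JSON.stringify({ type: "done" }));
  });
});
```

Streaming tokens rather than waiting for the complete reply lets TTS start speaking earlier, which is the main lever for keeping perceived latency low.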

3. Lip Sync & Animation in Unreal Engine
Implement real-time lip sync for our pre-existing dragon character (custom skeletal rig, NOT a MetaHuman).
Use NVIDIA Audio2Face, OVRLipSync, FaceFX, or similar solutions.
Synchronize the synthesized voice with facial animations (a simplified illustration follows this list).
Add expressive body movements (head tilts, blinking, small idle animations).
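The viseme-based tools listed above are the expected approach. Purely as an illustration of the data the animation layer consumes, the toy sketch below derives a per-frame "mouth open" weight from audio loudness; it is a crude fallback or debugging aid, not a substitute for Audio2Face or OVRLipSync.

```ts
// Toy fallback only: derive a per-animation-frame "mouth open" weight from audio
// loudness. Real viseme-quality lip sync should come from Audio2Face / OVRLipSync.
function mouthOpenCurve(
  samples: Float32Array, // mono PCM samples in [-1, 1]
  sampleRate: number,
  fps = 30               // animation frame rate targeted in Unreal
): number[] {
  const samplesPerFrame = Math.floor(sampleRate / fps);
  const weights: number[] = [];
  for (let start = 0; start < samples.length; start += samplesPerFrame) {
    const frame = samples.subarray(start, start + samplesPerFrame);
    // RMS loudness of this frame's audio window.
    let sumSq = 0;
    for (const s of frame) sumSq += s * s;
    const rms = Math.sqrt(sumSq / Math.max(frame.length, 1));
    // Clamp to [0, 1]; a real implementation would smooth this and map to visemes.
    weights.push(Math.min(1, rms * 4));
  }
  return weights;
}
```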

4. Integration with Strapi for User Data Management
Implement a method to fetch and utilize user-specific data (e.g., their dragons) from the Strapi backend (a fetch sketch follows this list).
Ensure seamless integration between Unreal Engine, the AI speech system, and Strapi.
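A minimal sketch of the Strapi side, assuming a Strapi v4-style REST API with a dragons collection related to an owner; the collection name, relation field, and token handling are assumptions about your schema.

```ts
// Sketch: fetch a user's dragons from a Strapi v4-style REST endpoint.
// STRAPI_URL, the "dragons" collection, and the "owner" relation are assumptions.
interface StrapiDragon {
  id: number;
  attributes: { name: string; [key: string]: unknown };
}

async function fetchUserDragons(userId: number): Promise<StrapiDragon[]> {
  const url =
    `${process.env.STRAPI_URL}/api/dragons` +
    `?filters[owner][id][$eq]=${userId}&populate=*`;

  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${process.env.STRAPI_API_TOKEN}` },
  });
  if (!res.ok) throw new Error(`Strapi request failed: ${res.status}`);

  const { data } = await res.json();
  return data as StrapiDragon[];
}
```

The returned records would be injected into the LLM prompt/context so the dragon can mention the user’s own dragons in conversation.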

5. Testing & Final Delivery
Ensure seamless integration of all components.
Provide a fully functional, tested version in Unreal Engine.

Preferred Skills & Technologies:
Unreal Engine (Blueprints or C++) – AI & Animation integration.
AI Voice Generation – Experience with TTS, voice cloning, and real-time AI speech synthesis.
Lip Sync & Facial Animation – Familiarity with NVIDIA Audio2Face, OVRLipSync, FaceFX, or similar tools.
WebRTC/WebSocket Integration – For real-time communication between Unreal Engine and the AI-powered speech system.
Node.js & API Handling – Experience with Strapi or similar headless CMS for user data management.
Offer Requirements:
Fixed-price offer for the full project (1 dragon, 4 different voices).
Estimated delivery time (proposed by the freelancer).
Portfolio showcasing relevant work (Unreal Engine AI, TTS, Lip Sync, etc.).
Brief explanation of how you would approach this project, including the AI speech solution you recommend for low-latency, real-time voice responses.
We are looking for a complete, high-quality implementation.
💡 If you have the right expertise and can deliver within a reasonable timeframe and budget, please submit your fixed-price offer!
