Off-the-shelf copilots are dead. Build yours with Pieces

How to build a copilot with your own data

👋 Hey Devs,

With Pieces, you can build your own copilot with local data, on-device AI, and full control.

Our SDKs (C# and Python) make it easy to get started.

With support for switching between LLMs and SLMs like GPT-4o, Gemini, and Phi-3, you can choose what works best for your needs, whether cloud-based or running locally on your machine. 👀
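
To give you a feel for it, here's a minimal sketch of asking the copilot a question from Python with the pieces_os_client package. The wrapper names below (PiecesClient, model_name, copilot.stream_question) reflect our understanding of the Python SDK's wrapper and may differ in your installed version, so treat this as a starting point and check the current docs:

    # Minimal sketch: ask the Pieces copilot a question from Python.
    # Assumes `pip install pieces_os_client` and Pieces OS running locally.
    # Names like PiecesClient, model_name, and copilot.stream_question follow
    # the SDK wrapper as we understand it and may differ between versions.
    from pieces_os_client.wrapper import PiecesClient

    client = PiecesClient()

    # See which models Pieces OS exposes, then pick one (cloud LLM or local SLM).
    print(client.available_models_names)
    client.model_name = "GPT-4o Chat Model"  # example name; use one from the list above

    # Stream the copilot's answer as it is generated.
    for response in client.copilot.stream_question("Explain what a copilot is in one paragraph"):
        if response.question:
            print(response.question.answers.iterable[0].text, end="", flush=True)

    client.close()

Swap the model name for an on-device option like Phi-3 to keep everything running locally.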

🧠 What you'll learn:

  • How to build a copilot from scratch

  • How to build a copilot for beginners

  • The difference between copilots and agents

  • How Microsoft Copilot Studio and GitHub Copilot compare

  • How to optimize LLMs with RAG and multi-modal inputs

📕 Why build a copilot with Pieces?

  • Use your own data: Connect context from your files, tasks, discussions, and memory (see the sketch after this list).

  • Run locally or in the cloud: Pair cloud LLMs with on-device nano models for privacy-first performance.

  • Built-in memory: Add Pieces Long-Term Memory for accurate recall.

  • RAG included: Use built-in Retrieval Augmented Generation to keep context focused and cost-efficient.
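
As a rough sketch of the "use your own data" piece, here's how attaching local files as copilot context can look in Python. The copilot.context.paths field is again based on our understanding of the SDK wrapper and may differ in your version; the file path is just a placeholder:

    # Sketch: ground copilot answers in your own files (simple RAG-style context).
    # copilot.context.paths is an assumption based on the Python SDK wrapper;
    # the path below is a placeholder for your own material.
    from pieces_os_client.wrapper import PiecesClient

    client = PiecesClient()

    # Point the copilot at local files you want it to reason over.
    client.copilot.context.paths.append("/path/to/your/project/README.md")

    for response in client.copilot.stream_question("What does this project do?"):
        if response.question:
            print(response.question.answers.iterable[0].text, end="", flush=True)

    client.close()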

We even walk you through building a Star Wars-themed copilot using our C# SDK, complete with chat UI, conversation history, context injection, and more.

Share & Earn

Know other developers struggling with context-switching in their AI tools? Join our Ambassador Program to get Pieces swag, participate in giveaways, and even get paid for talking about Pieces!

Ready to give your AI a memory upgrade? → Download Pieces Now

Happy coding! 

  • The Pieces Team

P.S. Join our Discord community to share your experience with long-term memory, get help from the Pieces team, and connect with other developers revolutionizing their AI workflows.