Hopefully it's not just on paper :) There are a lot of rough edges still, but we hope to iron them out as quickly as we can.
One of our core design principles for Cody is to make it "unmagic". Like, the AI is magic enough, but the rest of what we're doing in terms of orchestrating the LLMs in combination with various other data sources and backends should be clear and transparent to the user. This allows for greater understandability and steerability (e.g., if Cody infers the wrong context, maybe you can just tell it the file it should be reading and then regenerate the answer).
Copilot is a great tool, and Oege de Moor, Alex Graveley, and the whole GitHub Next team deserve huge credit for shipping it. That being said, I really want the standard AI coding assistant to be open, and there's been a ton of innovation in LLMs since Copilot's initial launch that doesn't seem to have been rolled in yet. I think this is a case where being open means we can accelerate the pace of innovation.