I know everyone is talking about ChatGPT right now. One aspect I haven’t seen covered much is what it actually looks like to build software with it. There are plenty of articles about cool things you can do with it, but what do you run into when you build real-world software?
A few days ago I sat down with @jhooks to find out. My idea was to build a synth using the Web Audio API. You can watch the recording below. Early results were impressive; we were able to get sound out without knowing anything about the Web Audio API. However, it was difficult to figure out how to get ChatGPT to work on individual pieces and hook them together.
You can likely make it work better with better prompts; what we really need, though, is something like this that is aware of your entire project. Imagine telling it to refactor pieces of it, or to abstract out a piece of code.
It still worked great though, and it was a fun experiment. We ended up with a working app that lets you control frequency and LFO frequency (see the codesandbox for code).
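For a sense of what the final app is doing under the hood, here is a minimal sketch of a Web Audio graph with a main oscillator whose pitch is modulated by an LFO. This is my own illustrative reconstruction, not the code ChatGPT generated; the `buildSynth` function and its parameter names are hypothetical, and `lfoDepth` (how many Hz the LFO sweeps the pitch) is an assumed extra knob.

```javascript
// Hypothetical sketch: an oscillator whose frequency is wobbled by an LFO.
// buildSynth takes an AudioContext (or a compatible object) so the graph
// can be wired up without immediately starting playback.
function buildSynth(ctx, { frequency = 440, lfoFrequency = 5, lfoDepth = 20 } = {}) {
  const osc = ctx.createOscillator();   // the audible tone
  osc.frequency.value = frequency;

  const lfo = ctx.createOscillator();   // low-frequency oscillator
  lfo.frequency.value = lfoFrequency;

  const lfoGain = ctx.createGain();     // scales LFO output to +/- lfoDepth Hz
  lfoGain.gain.value = lfoDepth;

  lfo.connect(lfoGain);
  lfoGain.connect(osc.frequency);       // modulate the main oscillator's pitch
  osc.connect(ctx.destination);

  return { osc, lfo };
}

// In a browser you would then do something like:
//   const { osc, lfo } = buildSynth(new AudioContext());
//   osc.start();
//   lfo.start();
```

Connecting a node into an `AudioParam` like `osc.frequency` is the standard Web Audio way to modulate a parameter; the UI sliders in the app would just write to `osc.frequency.value` and `lfo.frequency.value`.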
Watch us here:
And here’s the final app: