AI Plays <Pong>
Looking Back at My First React Component
I was doing some local archaeology on my laptop and discovered a gem from five years ago (just before I started my last job). I wanted to give React a test run before using it professionally at work, and this is the first React code I ever wrote. I toggled the repository to public so that you can check it out and roast my code.
In addition to the initial commit from five years ago, I found some stashed code where I had apparently been experimenting with player 2 (the computer). I moved the uncommitted code to a branch to check it out, and I found a simple learning "AI" that made the opponents less perfect, more beatable, and more fun, while growing more difficult over time. A recommissioned version of the code from that branch is here:
Why put Pong in React?
Is React the correct tool for building a canvas-based Pong game? Probably not! But learning can happen even with a bad plan. I got a kick out of adding the following line to a starter React app and then hacking at the rest until it worked.
<Pong />
I learned about the HTML canvas, and that knowledge became useful in a "proper" React component nearly a year later at my job; you never know what you'll pick up when you explore freely. The only really "React-y" thing in this first component was a few component-scoped variables, like a ref to the canvas:
How does the AI work?
We generate a pool of "mutants" who take turns competing against the player. A mutant that wins gets a score bump based on the number of volleys it successfully returned, and after each round the lowest-ranking mutants are killed off and replaced with variations of the winners. Over time, the opponents get better at the game. Each mutant has two parameters (a and b) that control its paddle according to something like the following:
const ai = this.ai[this.iteration];
opponent.pos.y = ball.pos.x * ai.a + ball.pos.y * ai.b;
I created a pull request so you can see just the upgraded opponent code. It's a pretty simple example of a genetic algorithm. The ideal solution would be a = 0, b = 1, but if you play against the opponents from the master branch, you'll find that solution isn't very fun.
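To see why a = 0, b = 1 is the ideal policy, here is a minimal sketch in plain JavaScript (paddleTarget and the object shapes are my own illustrative names, not the original code): with those parameters the paddle's y position simply mirrors the ball's y position.

```javascript
// Hypothetical helper: compute the paddle's target y from a mutant's parameters.
function paddleTarget(ball, ai) {
  return ball.pos.x * ai.a + ball.pos.y * ai.b;
}

const ball = { pos: { x: 120, y: 75 } };

// With a = 0, b = 1 the x term drops out and the paddle tracks the ball exactly.
console.log(paddleTarget(ball, { a: 0, b: 1 })); // → 75
```

Any nonzero a mixes the ball's x position into the paddle's movement, which is exactly the kind of imperfection that makes an opponent beatable.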
When losing strategies are replaced, we apply a small amount of randomness to these values and use that to populate the next generation. In this way, winning strategies "survive" over time and losing strategies disappear.
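The selection-and-mutation step described above might look something like this sketch (plain JavaScript; nextGeneration, the mutant object shape, and the mutation rate are my assumptions, not the actual code in the branch):

```javascript
// Hypothetical sketch of one generation step.
// Each "mutant" is { a, b, score }; higher score means more returned volleys.
function nextGeneration(mutants, mutationRate = 0.1) {
  // Rank by score, best first.
  const ranked = [...mutants].sort((x, y) => y.score - x.score);

  // The top half "survives"; the losing strategies disappear.
  const survivors = ranked.slice(0, Math.ceil(ranked.length / 2));

  // Refill the pool with jittered copies of random survivors.
  const children = [];
  while (survivors.length + children.length < mutants.length) {
    const parent = survivors[Math.floor(Math.random() * survivors.length)];
    children.push({
      a: parent.a + (Math.random() - 0.5) * mutationRate,
      b: parent.b + (Math.random() - 0.5) * mutationRate,
      score: 0,
    });
  }

  // Survivors keep their parameters but start the next round at zero.
  return [...survivors.map(m => ({ ...m, score: 0 })), ...children];
}
```

Run over many rounds, parameter pairs near a = 0, b = 1 tend to accumulate in the pool, which is why the opponents get harder the longer you play.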
It was certainly fun to build this little project. If you've ever wanted to play around with a genetic algorithm or React, open the StackBlitz link above and try hacking on it.