Razvan · Jun 28, 2025 6:16 pm


Subject: Re: AI implementation
Replying to myself, 9 years later!

I was recently thinking of playing with a Transformer/attention model on simplified input (not natural language, but robotic control). Further along in that thought process I found myself mentally designing a test game to train the model on, and I remembered that MAXR does basically the same thing, which brought back my earlier work on building an AI.

My last question still stands: do you have a set of recorded games to train the AI from? I find it a shame that the idea died for lack of data.

Also, thinking back to the hardware we had 10 years ago (at my home, not at OpenAI or DeepMind), a Transformer attention mechanism was not feasible back then for natural language, but it was clearly feasible for a reduced embedding vocabulary (or even no embedding at all), such as the commands of MAXR. In theory, had I continued the project here, I could have pushed the state of the art in generative AI about a year earlier.
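
For concreteness, here is a minimal sketch of what I mean by a Transformer over game commands instead of words, in PyTorch. The command vocabulary and model sizes are made up for illustration; they are not MAXR's actual command set or anything from the old project.

```python
# Minimal sketch: a tiny Transformer that models sequences of game commands
# instead of natural-language tokens. The command names below are
# hypothetical placeholders, NOT MAXR's real command set.
import torch
import torch.nn as nn

COMMANDS = ["MOVE", "ATTACK", "BUILD", "LOAD", "UNLOAD", "END_TURN", "<pad>"]
VOCAB = {c: i for i, c in enumerate(COMMANDS)}

class CommandModel(nn.Module):
    def __init__(self, vocab_size, d_model=64, nhead=4, num_layers=2, max_len=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)   # command-token embedding
        self.pos = nn.Embedding(max_len, d_model)        # learned positional embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, vocab_size)       # next-command logits

    def forward(self, tokens):                           # tokens: (batch, seq_len)
        seq_len = tokens.size(1)
        positions = torch.arange(seq_len, device=tokens.device)
        x = self.embed(tokens) + self.pos(positions)
        # Causal mask so each step only attends to earlier commands.
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf"),
                                     device=tokens.device), diagonal=1)
        h = self.encoder(x, mask=mask)
        return self.head(h)                              # (batch, seq_len, vocab)

# Toy usage: score possible next commands after a short recorded sequence.
model = CommandModel(len(VOCAB))
seq = torch.tensor([[VOCAB["MOVE"], VOCAB["ATTACK"], VOCAB["END_TURN"]]])
logits = model(seq)
print(logits.shape)  # torch.Size([1, 3, 7])
```

With recorded games tokenized into command sequences like this, the usual next-token training loop from language modelling would apply directly, which is exactly why the question about recorded games above matters.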