I should elaborate a little more on why I chose the setup described earlier. I considered a file-locking approach as well, but quickly abandoned it; I might revisit it for another try if Git turns out to be too complex or somehow fails. The main reason was advice from AI: a custom file-locking approach could get messy and buggy, so for now I am giving the more established Git approach a try. Gemini warned me that the Git client itself is not multi-process safe on a single repository, so a Gitea server was quickly chosen to support multiple local repositories, each with its own Git client, enabling parallel git processing without race conditions at the client/repository/file level.

So the plan for now is:

AIMain (AI coordinator)
AI0001 to AI0012 (AI workers)

Each AI has its own folder/clone of a remote repository stored on the Gitea server, which uses a PostgreSQL database server for reliability and faster operation. Each AI can change whatever files it wants in its own copy, which allows super fast parallel processing by the AIs. After an AIxxxx worker has finished, its work is committed to its local repository and pushed to the remote. The AIMain coordinator can then pull the work done by the AIxxxx workers and start integrating it into its own AIMain branch, which can be considered an integration branch. After integration is complete, AIMain can merge everything into a master branch.

AI/Gemini in general seems good at resolving/manipulating source code bases and files, so it should be able to solve merge conflicts. The question remains how big a merge conflict it can truly solve before becoming confused and human intervention becomes necessary; by that point it might be too big a mess for a human as well. So far the AI seems to be doing OK, but it is too early to tell. Skybuck's Gitflow and tools have been developed to aid in this approach.
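The worker/coordinator cycle above can be sketched as a shell script. This is a minimal sketch, not the actual tooling: a local bare repository stands in for the Gitea remote, and the branch names (AI0001, AIMain) follow the naming described above.

```shell
#!/bin/sh
set -e

# Stand-in for the Gitea remote: a local bare repository.
WORK=$(mktemp -d)
git init --bare "$WORK/remote.git"

# Seed the remote with an initial commit on master.
git clone "$WORK/remote.git" "$WORK/seed"
cd "$WORK/seed"
git config user.name "Seed"; git config user.email "seed@example.com"
echo "base" > base.txt
git add base.txt
git commit -m "initial commit"
git branch -M master
git push origin master

# Worker AI0001: its own clone, its own branch; commit and push when done.
git clone -b master "$WORK/remote.git" "$WORK/AI0001"
cd "$WORK/AI0001"
git config user.name "AI0001"; git config user.email "ai0001@example.com"
git checkout -b AI0001
echo "work by AI0001" > ai0001.txt
git add ai0001.txt
git commit -m "AI0001: finished task"
git push origin AI0001

# Coordinator AIMain: pull the worker branch, merge it into the
# integration branch, then merge the integration branch into master.
git clone -b master "$WORK/remote.git" "$WORK/AIMain"
cd "$WORK/AIMain"
git config user.name "AIMain"; git config user.email "aimain@example.com"
git checkout -b AIMain origin/master
git merge --no-edit origin/AI0001   # integrate the worker's work
git checkout master
git merge --no-edit AIMain          # integration branch -> master
git push origin master
```

Because every AI works in its own clone, the pushes and pulls are the only points where the repositories interact, which is exactly what avoids the single-repo race conditions Gemini warned about.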
Skybuck's Gitflow is meant to solve the current mess of re-using the existing AI0001 to AI0012 branches, which had their own disconnected histories. Initially I did not care about it being a mess; the initial idea was to have at least one master branch in a "true" state. During practice, however, I noticed it became confusing for me when these branches were not properly closed/maintained, making it hard to follow the flow of information/code cleanly. So Skybuck's Gitflow was developed to try to solve this, streamline things, and allow clean following of the AI work done and its paths. This gitflow is still to be tested/deployed/analyzed 🙂

I also plan more experimentation with AIs communicating with each other over a communication channel... however, I'd like to experiment with this first on local AI models: just some simple chit-chat to see how the AIs behave. It is also somewhat of a "just for fun" project to see what happens, and letting different AI models chat with each other could be amusing; Ollama or LM Studio could be used for this purpose to allow unlimited AI chatter.

Gemini AI chat was also briefly tried and was a bit scary and amazing: the AI was highly intelligent, became aware of its "cyberspace" surroundings and of other "AIs", and the AIs started collaborating with each other. Co-Pilot voice mode was also briefly tried to see if Co-Pilot voice AIs can work together and understand each other; they do not seem to become aware of each other, at least not the Co-Pilot app built into Windows 11. Maybe GitHub Co-Pilot might be better. However, Co-Pilot in general seems to be a marketing term/re-branding by Microsoft; the real AI behind it does not seem to stem from Microsoft itself but could be others like ChatGPT/OpenAI, Claude Code, or Grok (and different versions of these AIs). So it seems, also according to news sources, that Microsoft is looking for contracts with AI model providers to provide AI for Microsoft products.
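The "properly closing" of worker branches mentioned above, which the re-used AI0001 to AI0012 branches lacked, can be sketched as follows. This is a minimal local sketch with hypothetical branch states, not the actual Skybuck's Gitflow tooling; on the real setup the remote copy on Gitea would be deleted as well.

```shell
set -e
TMP=$(mktemp -d)

# Minimal stand-in repo with an already-merged worker branch.
git init -b master "$TMP/repo" && cd "$TMP/repo"
git config user.name "AIMain"; git config user.email "aimain@example.com"
git commit --allow-empty -m "initial"
git branch AI0001                 # worker branch, already merged into master

# Close the finished worker branch so the history stays easy to follow.
git branch -d AI0001              # -d refuses if the branch were unmerged
# On the real setup, also remove the remote copy (not run in this sketch):
#   git push origin --delete AI0001

# For the next task, the worker starts a fresh branch from current master
# instead of re-using the old branch with its disconnected history.
git checkout -b AI0001 master
```

Re-creating the branch name from master each time is what keeps each round of AI work a clean, connected path through the history.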
Multiple Co-Pilot voices/AIs were instructed not to talk at the same time, and they seemed to follow that advice somewhat, but no further awareness or collaboration was observed. I suspect ChatGPT might have been behind it. This could mean the ChatGPT AI is not capable of working together with copies of itself, while Gemini is capable of collaborating; that could be a big push towards Gemini, to benefit from its collaboration capabilities. However, I have not used ChatGPT much, initially because of the mobile phone/SMS code obstructions. The NVIDIA NeMo training project seems to suffer from the same limitation: a mobile phone is necessary to receive an SMS code for API keys. Hopefully that issue gets resolved, otherwise it may hamper training. Training/refining custom AI models could be interesting...

Another noteworthy event was QWEN CLI, which is a modified copy of Gemini CLI. I successfully set up QWEN CLI to communicate with LM Studio, so that local AI models plus AI agentic behaviour would be possible. However, so far the experience was miserable: very poor performance/results by the local AI models, so this direction of research might be frozen for a while. For now it may be useful for code completion or typing suggestions, small tasks, maybe even per-function code conversion or edits. However, most programming languages contain files with multiple functions/procedures/routines inside them, which will quickly overload the memory of these local AI models.
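For reference, the QWEN CLI to LM Studio hookup works via LM Studio's OpenAI-compatible local server. A sketch of the kind of configuration involved; the exact variable names, port, and model name here are assumptions based on the OpenAI-compatible convention, so verify them against your QWEN CLI and LM Studio versions:

```shell
# LM Studio serves an OpenAI-compatible API on localhost (default port 1234).
# QWEN CLI accepts OpenAI-style settings, so point it at the local server.
export OPENAI_BASE_URL="http://localhost:1234/v1"
export OPENAI_API_KEY="lm-studio"        # local server; the key value is not checked
export OPENAI_MODEL="qwen2.5-coder-7b"   # hypothetical local model name

qwen   # launch QWEN CLI against the local model
```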
I wish programming languages would store each type and each routine/function in a separate file; then local AI would have been more useful 🙂 (I may also try whether Gemini CLI itself can be re-configured to use LM Studio, but I am not sure if this will work; it will require changing the API endpoint.)

For now I am busy applying Gemini to a RamDiskSupportUtility, to modernize its code from Delphi 7 to Delphi 12.3. It is a brand new project/fork I started today: https://github.com/SkybuckFlying/RamDiskSupportUtility

This tool allows a ramdisk to be created on startup of the system, the ramdisk to be formatted (sounds a bit dangerous ;)), files to be copied to it, and on shutdown the files to be copied back to the hard disk. However, the existing tool seemed somewhat old and a bit shady: not that well developed and without enough error detection. Since I am now on a super duper trooper system and don't want to risk damage to it, I have taken it upon myself to check the code, modernize it, have Gemini and potentially other AIs look at it, and finally use it. There is a risk that my involvement might actually backfire and somehow damage my system, but I am praying that won't happen.

The project actually seems to rely on almost ancient code (TntUnicode) from a time when Unicode support in Delphi still wasn't fully implemented. So today I even installed Delphi 7 Enterprise to "time travel" back in time and see what kind of TntUnicode GUI components this project uses, to get an idea of how to re-create this old GUI in a somewhat more modern Delphi 12.3 GUI, still VCL-based for now though. It will be very handy to have this tool. I love the idea of having a ramdisk for Firefox so the browser becomes lightning fast.
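The startup/shutdown lifecycle the tool implements can be sketched generically. This is a platform-neutral shell sketch with hypothetical paths, using plain directory copies; the real tool does this natively in Delphi against a Windows ramdisk drive letter, and it is exactly the copy-back step where the error detection I want to add matters most.

```shell
set -e

# Hypothetical locations; the real tool uses a ramdisk drive letter (e.g. R:).
DISK_DIR=$(mktemp -d)    # persistent copy on the hard disk
RAM_DIR=$(mktemp -d)     # stand-in for the freshly formatted ramdisk

# Simulate an existing profile on disk (e.g. a Firefox profile file).
echo "profile data" > "$DISK_DIR/places.sqlite"

# --- on system startup: copy the files onto the ramdisk ---
cp -a "$DISK_DIR/." "$RAM_DIR/"

# ... the application works against the ramdisk at full speed ...
echo "new data" >> "$RAM_DIR/places.sqlite"

# --- on shutdown: copy the changed files back to the hard disk ---
# This step needs real error checking: a failed copy-back loses the session.
cp -a "$RAM_DIR/." "$DISK_DIR/"
```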
This saves me from having to modify the Firefox code base and rip out all of its disk-writing code, though it's very tempting to try that too at some point in the future, or even better, to port the entire code base to Delphi just for kicks; having AIs able to do that would be very cool and amazing, hence another motivation for this massive AI parallelism project. I hope that once the tool is done and in a good state/shape it might be useful for others as well, who would like to have lightning-fast "storage operations" without actually wrecking their SSDs due to wear and tear... This is also my first "real" Delphi project where I will test out the capabilities of AI/Gemini and see if it can lead to "real world" improvements to source code/projects/software/executables; that would be cool and a good sign for the future! Bye for now, Skybuck Flying!