I’d like to run LlamaGPT, but have the actual computation happen on another PC. For comparison: Frigate has a proxy in Home Assistant, so I can use Frigate from within Umbrel even though it runs elsewhere.
I’d like to do the reverse of this: keep using the Umbrel interface (and let others use it too), but run the actual LlamaGPT inference on another computer (an M1 Mac). For convenience and security, I don’t want to expose the macOS device to the internet, and I don’t want to maintain it as a dedicated server.
Would it be possible to add an option for LlamaGPT to use Ollama (or something similar that runs locally on macOS) as its backend?
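For what it’s worth, here’s a rough sketch of the setup I have in mind, assuming LlamaGPT (or its web UI) could be pointed at a remote Ollama endpoint. `MAC_LAN_IP` is a placeholder for the Mac’s local address, and the Mac would only be reachable on the LAN, not port-forwarded to the internet:

```shell
# On the M1 Mac: serve Ollama on the local network.
# Ollama listens on 127.0.0.1:11434 by default; setting OLLAMA_HOST
# makes it accept connections from other machines on the LAN.
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# From the Umbrel box: check that the Mac's Ollama API is reachable.
# (MAC_LAN_IP is a placeholder for the Mac's LAN address.)
curl http://MAC_LAN_IP:11434/api/generate \
  -d '{"model": "llama2", "prompt": "Hello"}'
```

If LlamaGPT had a setting for the backend URL, the Umbrel app could then just forward requests to that endpoint, much like the Frigate proxy in Home Assistant.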