Zach, this is a fantastic rundown. Your setup perfectly illustrates what I call the "Internet of Smart Things" -- AKA Edge AI -- in action. The key insight here is that every device you picked processes locally and doesn't need cloud permission to function.
Thread is the protocol that really excites me in this space. It's a true mesh that operates at the device level with no cloud dependency -- exactly the architecture that Edge AI needs to scale. Your point about Matter group commands being immature is spot on though. The spec is still catching up to what local-first ecosystems like Home Assistant have been doing for years.
The mmWave presence sensors are a great example of edge intelligence at the sensor level. When you combine that with local facial recognition running on your own hardware, you're building a home that reasons about its environment without ever leaving your LAN. That's the real vision -- devices that are genuinely smart on their own, not just cloud-connected endpoints pretending to be intelligent.
Curious what you're running Home Assistant on hardware-wise. With the ESP32 devices and Frigate for vision processing, the compute requirements start to add up -- which is exactly where dedicated edge AI silicon is heading next.
I was thinking about writing another post on the hardware. It's all on a 5-year-old Intel desktop, but I have a 4070 GPU attached, and I'm using about 11.5 GB of VRAM out of 12. I was thinking of posting how every bit of VRAM adds up and what's using it.
Getting a whole smart home into 12 GB can be tough!
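For anyone curious what "every bit of VRAM adds up" looks like in practice, here's a minimal sketch of a per-service VRAM budget on a 12 GB card. The service names and megabyte figures are purely hypothetical placeholders (Zach hasn't shared his actual breakdown yet), but the arithmetic shows how quickly a local-first stack approaches the ceiling:

```python
# Hypothetical VRAM budget for a local smart-home stack on a 12 GB GPU.
# All service names and numbers below are illustrative guesses, not
# measured values from the author's setup.
budget_mb = {
    "Frigate object detection": 2000,
    "Facial recognition model": 1500,
    "Local LLM (quantized)": 6500,
    "Speech-to-text": 1000,
    "CUDA context overhead": 500,
}

total_mb = sum(budget_mb.values())
capacity_mb = 12 * 1024  # 12 GB card

# Print the budget largest-first, with headroom at the end.
for name, mb in sorted(budget_mb.items(), key=lambda kv: -kv[1]):
    print(f"{name:28s} {mb / 1024:5.2f} GB")
print(f"{'Total':28s} {total_mb / 1024:5.2f} GB of {capacity_mb / 1024:.0f} GB")
print(f"{'Headroom':28s} {(capacity_mb - total_mb) / 1024:5.2f} GB")
```

With placeholder numbers like these, the total lands around 11.2 GB, leaving well under a gigabyte of headroom, which is why "what's using it" is such an interesting post topic.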