Postgres 18: Major Advances in Performance, Security, and Flexibility
For developers, choosing the right database is critical in today’s fast-paced, data-centric world. Postgres 18 answers the call for speed, enhanced security, and seamless integration, making it a compe...
Tags: asynchronous I/O, database, Kubernetes, OAuth, open source, Postgres 18, security, SQL standards
Smarter LLMs: How the vLLM Semantic Router Delivers Fast, Efficient Inference
Large language models are evolving rapidly. Instead of simply increasing their size, innovators now focus on maximizing efficiency, reducing latency, and assigning compute resources according to query...
Tags: enterprise AI, Kubernetes, latency optimization, LLM inference, model efficiency, open source AI, semantic routing
The Model Context Protocol Registry: Building the Backbone for AI Server Discovery
The Model Context Protocol Registry is an open, standards-driven catalog and API for MCP servers. If you are building or running AI tools that speak MCP, the registry is the connective tissue that tur...
Tags: AI registry, API, DevOps, Enterprise, Go, Kubernetes, MCP, Open source, Package validation, Subregistries
How John Lewis Revolutionized Developer Experience with Platform Engineering
In 2017, John Lewis, a leading UK retailer, confronted the challenges of its aging monolithic e-commerce platform. Hampered by sluggish release cycles and complex cross-team dependencies, the organiz...
Tags: developer experience, DevOps, e-commerce, Google Cloud, Kubernetes, microservices, multi-tenant, platform engineering
Trivy, Unpacked: One Scanner for Containers, Code, and Clusters
Security tooling often splinters by surface area: one product for containers, another for code, another for Kubernetes. Trivy takes the opposite approach. It is a single, open-source scanner that unde...
Tags: container security, CVE, Kubernetes, SBOM, supply chain, Trivy
Docker Compose Provider Services Are Streamlining Development
For years, Docker Compose has been a staple for developers wanting to spin up multi-container environments locally. Now, with the addition of provider services in Docker Compose v2.36.0, the evolution ...
Tags: cloud integration, Compose, DevOps, Docker, Kubernetes, plugins, provider services
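As a brief illustration of the provider services mentioned in the teaser above: a Compose file can declare a service backed by a `provider` block instead of an image, and Compose delegates its lifecycle to a plugin. This is a minimal sketch assuming Docker Compose v2.36.0 or later with the `model` provider type available; the image and model names are placeholders, not from the article.

```yaml
services:
  # Ordinary container service (placeholder image name)
  app:
    image: example/my-app
    depends_on:
      - llm

  # Provider service: no image; a Compose plugin provisions the
  # resource (here, a local model via the "model" provider type)
  llm:
    provider:
      type: model
      options:
        model: ai/smollm2   # placeholder model identifier
```

On `docker compose up`, the plugin handling the `model` type provisions the resource and injects its connection details into dependent services, so `app` can reach the model without any manual setup.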
vLLM Is Transforming High-Performance LLM Deployment
Deploying large language models at scale is no small feat, but vLLM is rapidly emerging as a solution for organizations seeking robust, efficient inference engines. Originally developed at UC Berkeley...
Tags: AI inference, GPU optimization, Kubernetes, large language models, memory management, model deployment, vLLM