VIEApps NGX is an open-source, application-level communication runtime platform for distributed systems.
Unlike traditional microservices frameworks that focus on service implementation, VIEApps NGX focuses on runtime-level communication, routing, and execution orchestration.
Built on .NET, it provides a high-performance, message-driven foundation for coordinating, routing, and executing distributed services at scale. Business services can be written in any programming language; the platform handles all underlying communication, routing, and execution orchestration in a unified way.
Runtime-Centric Architecture
VIEApps NGX separates two concerns:
- service logic (business implementation)
- system operation (communication, routing, execution)
This allows distributed systems to scale and evolve without modifying core service logic.
The system is structured as a runtime ecosystem composed of:
- communication abstractions
- a distributed routing and execution runtime
- independently deployable microservices
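The separation above can be sketched in a few lines. This is an illustrative stand-in, not the platform's actual API: the class and method names here are invented to show where the boundary sits.

```python
# Sketch (invented names): a business handler knows nothing about transport;
# the runtime owns registration, routing, and invocation.
from typing import Any, Callable, Dict


class Runtime:
    """Minimal stand-in for the communication runtime."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[..., Any]] = {}

    def register(self, uri: str, handler: Callable[..., Any]) -> None:
        # System operation: the runtime tracks who can serve which procedure.
        self._handlers[uri] = handler

    def invoke(self, uri: str, *args: Any) -> Any:
        # System operation: routing and execution live here, not in the service.
        return self._handlers[uri](*args)


# Service logic: plain business code, no communication concerns.
def get_user(user_id: str) -> dict:
    return {"id": user_id, "name": "example"}


runtime = Runtime()
runtime.register("users.get", get_user)
print(runtime.invoke("users.get", "42"))  # {'id': '42', 'name': 'example'}
```

Because the service function is a plain callable, it can evolve or be redeployed without touching the runtime side, and vice versa.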
Communication Runtime Core
At the center of the system is a communication runtime (implemented via the API Gateway).
It is not a traditional reverse proxy.
It functions as a distributed service bus and RPC routing runtime, responsible for:
- message routing
- service coordination
- load distribution
- execution orchestration
The runtime is built on WAMP (Web Application Messaging Protocol), enabling message-driven RPC and dynamic routing across nodes.
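The flavor of WAMP-style routed RPC can be sketched as a CALL/RESULT exchange. Real WAMP frames use numeric message type codes and a session handshake; this stdlib-only sketch keeps just the routing idea, with invented helper names.

```python
# Illustrative only: a simplified CALL/RESULT exchange in the spirit of WAMP
# routed RPC. Real WAMP is more involved; this keeps the decoupling idea.
import itertools

_request_ids = itertools.count(1)
_registrations = {}  # procedure URI -> callee endpoint


def register(procedure, endpoint):
    _registrations[procedure] = endpoint


def call(procedure, *args):
    # The caller builds a message; it never holds a reference to the callee.
    msg = {"type": "CALL", "request": next(_request_ids),
           "procedure": procedure, "args": list(args)}
    # The router, not the caller, resolves the procedure URI to an endpoint.
    endpoint = _registrations[msg["procedure"]]
    result = endpoint(*msg["args"])
    return {"type": "RESULT", "request": msg["request"], "result": result}


register("math.add", lambda a, b: a + b)
print(call("math.add", 2, 3))  # RESULT message carrying 5
```

The key property: the caller knows only a procedure URI, so which node answers can change at any time without caller-side changes.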
Routed RPC Execution Model
VIEApps NGX is built around a Routed RPC model:
- service invocation is message-driven
- execution is dynamically routed across nodes
- callers are fully decoupled from service instances
Requests are routed in real time to available nodes, enabling:
- load-balanced execution
- elastic horizontal scaling without architectural changes
- high availability through runtime-level coordination
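A minimal sketch of dynamic routing across equivalent instances follows. The round-robin policy and node names are assumptions for illustration; the actual runtime may weigh nodes by load rather than rotating evenly.

```python
# Sketch of dynamic routing across equivalent service instances.
from collections import defaultdict
from itertools import cycle


class Router:
    def __init__(self):
        self._nodes = defaultdict(list)   # procedure -> handlers on nodes
        self._cursors = {}                # procedure -> round-robin iterator

    def join(self, procedure, handler):
        # A new node announces itself; no caller-side change is needed.
        self._nodes[procedure].append(handler)
        self._cursors[procedure] = cycle(self._nodes[procedure])

    def route(self, procedure, *args):
        # Each call is dispatched to the next available instance.
        return next(self._cursors[procedure])(*args)


router = Router()
router.join("echo", lambda x: ("node-a", x))
router.join("echo", lambda x: ("node-b", x))
print([router.route("echo", i)[0] for i in range(4)])  # alternates a/b
```

Adding a third node is just another `join`: horizontal scaling without any architectural change on the caller's side, as the text describes.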
Reactive Execution Engine
The runtime execution model is built on ReactiveX, enabling:
- asynchronous, event-driven execution
- non-blocking distributed pipelines
- high-throughput message processing
This provides efficient handling of high concurrency without thread-blocking bottlenecks.
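The platform itself builds on ReactiveX; the non-blocking idea can be shown with Python's asyncio (an assumption of this sketch, since Rx bindings are not used here):

```python
# The runtime uses ReactiveX; this sketch shows the same non-blocking idea
# with asyncio: many in-flight messages, no thread blocked on I/O waits.
import asyncio


async def handle(message: int) -> int:
    # Simulated I/O-bound work: the await yields control instead of blocking.
    await asyncio.sleep(0.01)
    return message * 2


async def pipeline(messages):
    # All messages are processed concurrently on one thread.
    return await asyncio.gather(*(handle(m) for m in messages))


results = asyncio.run(pipeline(range(5)))
print(results)  # [0, 2, 4, 6, 8]
```

With blocking calls, five 10 ms waits would cost 50 ms of a thread's time; here they overlap, which is the throughput win the text refers to.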
HTTP Layer as Communication Adapter
HTTP is treated as an external adapter layer, not the core communication mechanism.
VIEApps NGX provides:
- REST API adapter (request-response)
- WebSocket adapter (bidirectional real-time)
- SSE adapter (fallback when WebSocket is unavailable)
All adapters map into the same internal message-driven runtime, preserving execution consistency regardless of transport.
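The adapter idea can be sketched as normalization into one internal envelope. Field names here are illustrative, not the platform's actual message schema:

```python
# Sketch: every transport adapter reduces to the same internal envelope, so
# the execution path is identical regardless of how a request arrived.
def to_envelope(transport: str, service: str, verb: str, body: dict) -> dict:
    return {"transport": transport, "service": service,
            "verb": verb, "body": body}


def from_rest(method: str, path: str, body: dict) -> dict:
    # e.g. POST /users -> service "users", verb "post"
    service = path.strip("/").split("/")[0]
    return to_envelope("rest", service, method.lower(), body)


def from_websocket(frame: dict) -> dict:
    return to_envelope("websocket", frame["service"], frame["verb"], frame["body"])


a = from_rest("POST", "/users", {"name": "x"})
b = from_websocket({"service": "users", "verb": "post", "body": {"name": "x"}})
# Same service, verb, and body: the runtime cannot tell them apart downstream.
print(a["service"] == b["service"] and a["body"] == b["body"])  # True
```

Only the `transport` tag differs, which is why execution behavior stays consistent across REST, WebSocket, and SSE.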
MCP Layer as AI-Native Adapter
Beyond HTTP, VIEApps NGX exposes business logic directly to AI assistants via Model Context Protocol (MCP).
The runtime automatically maps internal RPC services to MCP tools and resources. Pagination uses AES-encrypted, HMAC-signed cursors per resource to prevent tampering.
- tools/list → discovers available business operations
- tools/call → executes Routed RPC with admission control
- resources/read → streams data with backpressure signals
All MCP requests flow through the same runtime path: AI clients are subject to the same admission slots, backpressure, and zero-rejection guarantees as human traffic.
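How an RPC registry could surface as an MCP tools/list response is sketched below. The JSON-RPC envelope follows the MCP shape; the registry entries, tool names, and schemas are invented for illustration:

```python
# Sketch: project an internal RPC registry into an MCP tools/list result.
# The registry contents here are made up; only the envelope shape is MCP's.
rpc_registry = {
    "users.get": {"description": "Fetch a user by id",
                  "params": {"id": {"type": "string"}}},
    "cms.search": {"description": "Search portal content",
                   "params": {"query": {"type": "string"}}},
}


def tools_list(request_id: int) -> dict:
    tools = [
        {
            # Many MCP clients restrict tool names to letters, digits, "_", "-".
            "name": uri.replace(".", "_"),
            "description": meta["description"],
            "inputSchema": {"type": "object", "properties": meta["params"]},
        }
        for uri, meta in sorted(rpc_registry.items())
    ]
    return {"jsonrpc": "2.0", "id": request_id, "result": {"tools": tools}}


resp = tools_list(1)
print([t["name"] for t in resp["result"]["tools"]])  # ['cms_search', 'users_get']
```

Because the mapping is mechanical, every registered RPC becomes discoverable as a tool with no per-service plugin code, which is the "instantly becomes an AI tool" claim above.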
Hardened since Nov 16, 2025, and spec-compliant since Nov 30, 2025: the initial implementation was committed Nov 18, 2025, then updated to the MCP 2025-11-25 specification within 5 days. Validated with Postman, then production-proven with Claude Desktop and ChatGPT. No LangChain, no custom plugins: every existing RPC instantly becomes an AI tool. During the concurrent request spike, AI clients competed for slots alongside browsers, and the runtime made no distinction.
Case study - Petrolimex (Nov 30, 2025): ChatGPT connected via MCP to analyze press releases from 2012-2025. Initial attempts failed due to client-side cursor tampering. After implementing AES-encrypted, HMAC-signed cursors, the AI successfully traversed 13 years of data sequentially, extracting and analyzing detailed price adjustment patterns year-over-year. The entire workload ran through the same admission slots as production traffic, with zero rejections.
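The tamper-evident cursor idea from the case study can be sketched with the standard library. The platform AES-encrypts and HMAC-signs its cursors; this sketch shows only the HMAC half (the encryption step would need a third-party crypto library), and the key and field names are illustrative:

```python
# Sketch of tamper-evident pagination cursors (HMAC signing only; the
# platform additionally AES-encrypts, which is omitted here).
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # in a real deployment: a per-resource key


def issue_cursor(state: dict) -> str:
    payload = json.dumps(state, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    return base64.urlsafe_b64encode(payload + b"." + sig).decode()


def read_cursor(cursor: str) -> dict:
    raw = base64.urlsafe_b64decode(cursor.encode())
    payload, sig = raw.rsplit(b".", 1)  # hex signature contains no "."
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("cursor tampered")  # the failure mode described above
    return json.loads(payload)


cursor = issue_cursor({"year": 2012, "page": 3})
print(read_cursor(cursor))  # {'page': 3, 'year': 2012}
```

Any client-side edit to the cursor invalidates the signature, so sequential traversal can only proceed through server-issued cursors.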
Implementation note: the layer was built without MCP.SDK, because the official SDK's MVC-bound design conflicted with VIEApps NGX's message-driven runtime. A specs-pure implementation, written in one day, proved faster and more robust, and was battle-hardened against ChatGPT cursor tampering by Nov 30, 2025. (For scale: as of May 7, 2026, MCP.SDK v1.2.0 has 9M downloads.)
Runtime-Level Admission Control
All incoming requests are subject to runtime-level admission control (Router RPC Gate):
- dynamically regulates concurrency
- controls request flow under load
- prevents overload and cascading failures
This ensures system stability under extreme traffic spikes, without relying solely on infrastructure-level scaling.
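Slot-based admission can be sketched with a semaphore. The 1,500-slot figure comes from the capacity model stated later in this document; the class and method names are invented, and the real Router RPC Gate is more elaborate (per-service slots, backpressure signalling, asynchronous acquisition):

```python
# Sketch of slot-based admission control: a fixed number of slots, with
# non-blocking admission so overload is regulated instead of cascading.
import threading


class AdmissionGate:
    def __init__(self, slots: int = 1500):  # per-node capacity from the text
        self._sem = threading.BoundedSemaphore(slots)
        self._slots = slots
        self._in_flight = 0
        self._lock = threading.Lock()

    def try_admit(self) -> bool:
        admitted = self._sem.acquire(blocking=False)
        if admitted:
            with self._lock:
                self._in_flight += 1
        return admitted

    def release(self) -> None:
        with self._lock:
            self._in_flight -= 1
        self._sem.release()

    @property
    def available(self) -> int:
        return self._slots - self._in_flight


gate = AdmissionGate(slots=3)
admitted = [gate.try_admit() for _ in range(5)]
print(admitted, gate.available)  # [True, True, True, False, False] 0
```

A request that fails `try_admit` can be queued or told to back off rather than being rejected, which is how a gate like this supports a zero-rejection policy under spikes.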
Production validation: During a national traffic spike on May 7, 2026, the Router RPC Gate handled 3,278 concurrent requests while consuming 36% of its 9,000-slot capacity. Backpressure automatically throttled ingress when Redis reported queue buildup, keeping Redis CPU at 1.3%. Result: 1.15M requests processed in 60 minutes with zero rejections and no infrastructure scaling.
Runtime as the Source of Truth
The communication runtime becomes the system’s central control layer:
- routing decisions
- load balancing
- execution flow
- service coordination
This reduces reliance on:
- service mesh
- external load balancers
- separate RPC layers
How it behaves under load: Instead of ejecting pods or opening circuit breakers, the runtime propagates backpressure signals to clients. During the 3.2K concurrent request spike, Router RPC Gate availability dropped to 5,722 slots and Backpressure reached -42.5, instructing upstream services to slow down. The system recovered in 4 minutes without scaling a single node.
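One plausible shape for a signed backpressure signal derived from slot usage is sketched below. The actual formula behind the -42.5 figure above is not documented here; this only shows the idea that the sign tells upstreams whether to speed up or slow down:

```python
# Hypothetical backpressure signal (not the platform's actual formula):
# positive means headroom, negative asks upstream services to throttle.
def backpressure(in_flight: int, capacity: int, high_watermark: float = 0.5) -> float:
    """Signed signal around an assumed 50% usage watermark."""
    usage = in_flight / capacity
    return round((high_watermark - usage) / high_watermark * 100, 1)


print(backpressure(1800, 9000))   # 60.0: plenty of headroom
print(backpressure(6300, 9000))   # -40.0: upstream should slow down
```

Propagating such a signal lets clients self-throttle, which matches the recovery-without-scaling behavior described above.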
Runtime-Aware Edge Control
VIEApps NGX extends runtime control to the CDN layer through precision cache invalidation.
Cache is not managed by TTL or manual purging.
Cached content is treated as immutable at the edge: it is invalidated in real time, at the URL level, only when runtime-driven business logic explicitly requires it.
This prevents unnecessary traffic spikes and keeps system load stable even under dynamic content updates.
The system operates across both edge CDN and internal runtime caching layers, ensuring efficient load distribution and minimal origin pressure.
Even under transient failures, cached responses can be served to maintain system availability.
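The invalidation model can be sketched as a cache with no expiry timer at all. The class and URL are invented for illustration; a real edge integration would call the CDN's purge API instead of mutating a dict:

```python
# Sketch of runtime-driven, URL-level invalidation: entries have no TTL and
# are replaced only when a business event names the affected URLs.
class EdgeCache:
    def __init__(self):
        self._store = {}  # URL -> cached response (immutable until invalidated)

    def get(self, url):
        return self._store.get(url)

    def put(self, url, response):
        self._store[url] = response

    def invalidate(self, urls):
        # Called by business logic, not by a timer or manual purge.
        for url in urls:
            self._store.pop(url, None)


cache = EdgeCache()
cache.put("/news/price-update", "<html>old</html>")

# A content edit in the CMS invalidates exactly the affected URL:
cache.invalidate(["/news/price-update"])
print(cache.get("/news/price-update"))  # None: next request refills from origin
```

Because only changed URLs are touched, unrelated cached pages keep absorbing traffic, which is what keeps origin load flat during content updates.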
Infrastructure Capabilities
- Multi-database support (SQL + NoSQL)
- Distributed caching (L1 + L2)
- Built-in real-time messaging
These capabilities integrate directly with the runtime rather than existing as loosely coupled external components.
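The L1 + L2 caching layer mentioned above can be sketched as a two-tier lookup. Here a plain dict stands in for the shared distributed store (e.g. Redis); names and the promotion policy are illustrative:

```python
# Sketch of an L1 (in-process) + L2 (shared) cache hierarchy.
class TwoTierCache:
    def __init__(self, l2: dict):
        self.l1 = {}   # per-node, fastest
        self.l2 = l2   # shared across nodes (a dict standing in for Redis)

    def get(self, key):
        if key in self.l1:
            return self.l1[key]
        if key in self.l2:
            # Promote to L1 so repeated reads stay local to this node.
            self.l1[key] = self.l2[key]
            return self.l1[key]
        return None

    def put(self, key, value):
        self.l1[key] = value
        self.l2[key] = value


shared = {}
node_a, node_b = TwoTierCache(shared), TwoTierCache(shared)
node_a.put("user:42", {"name": "x"})
print(node_b.get("user:42"))  # served from L2, then promoted to node_b's L1
```

The promotion step is what keeps the shared store's load low once data is warm, consistent with the low Redis CPU figures reported earlier.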
Service Ecosystem
Example services include:
- Portals (CMS)
- Users & Identity
- Files
- OTP & Security
- IP Location
- Logs
All services operate as independent nodes within the runtime.
Summary
VIEApps NGX is not, at its core, an application framework.
It is a communication runtime platform for distributed systems that ships with a built-in framework and SDK for building microservices.
By moving control from service code and infrastructure layers into a unified runtime, it enables scalable, resilient, and loosely coupled system architectures.
Capacity model: 1,500 admission slots per node, 9,000 per cluster. Battle-tested at 3,278 simultaneous connections with 64% headroom remaining. The runtime measures capacity in slots, not CPU. CPU lies. InFlight doesn't.
AI tools: The runtime natively supports Model Context Protocol, exposing business operations as AI tools with built-in admission control.
