Technical Considerations of Delivering Modern On-prem
To be successful at delivering a Modern On-prem application, vendors need to be aware of some technical considerations. Whether a team currently builds a multi-tenant SaaS application or a traditional on-prem app, there are key capabilities to develop along the way.
Cloud Native Architecture
Moving an app to cloud-native is a journey, but the bottom line is that the patterns and primitives at the core of cloud-native architectures are among the biggest enablers of Modern On-prem applications. Read the deep dive on cloud native.
Observability and Monitoring
As a vendor, it’s important to ensure applications are highly operable from an observability standpoint. Many SaaS and on-prem software teams are familiar with metrics, logs, and traces, but delivering Modern On-prem applications means more than just exposing them. Because the teams operating the software are the end users, not experts in the product’s internals, it’s vital to focus on delivering actionable insights rather than raw telemetry.
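A minimal sketch of the difference between raw telemetry and an actionable insight: instead of only exposing a disk-usage gauge, the application interprets it and tells a non-expert operator what to do. The names (`Insight`, `check_disk_pressure`) and thresholds are illustrative assumptions, not a standard API.

```python
# Hypothetical sketch: translating a raw metric into operator guidance.
from dataclasses import dataclass


@dataclass
class Insight:
    severity: str  # "ok" | "warning" | "critical" (illustrative levels)
    message: str   # plain-language guidance the operator can act on


def check_disk_pressure(used_bytes: int, total_bytes: int) -> Insight:
    """Turn a disk-usage metric into an actionable recommendation."""
    pct = used_bytes / total_bytes * 100
    if pct >= 90:
        return Insight(
            "critical",
            f"Disk {pct:.0f}% full: expand the volume or prune old data now.",
        )
    if pct >= 75:
        return Insight(
            "warning",
            f"Disk {pct:.0f}% full: plan to add capacity soon.",
        )
    return Insight("ok", f"Disk usage healthy at {pct:.0f}%.")
```

The raw gauge still gets exported for expert users; the insight layer is what keeps a non-expert operator out of trouble.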
Managing External Dependencies
When developing an application, leveraging external services is a great way to outsource parts of application development and focus on core competencies. When deciding which services to leverage, it’s important to make sure they’re both swappable (the application depends on an interface, not a specific provider) and embeddable (an equivalent implementation can ship inside the on-prem install).
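One way to keep a dependency swappable and embeddable is to code against a small interface. A minimal sketch, assuming a generic blob-storage dependency; the interface and class names here are illustrative, not a real SDK:

```python
# Hypothetical sketch of a swappable, embeddable dependency boundary.
from abc import ABC, abstractmethod


class BlobStore(ABC):
    """The only surface the application depends on."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryBlobStore(BlobStore):
    """Embeddable stand-in that ships with on-prem or test installs."""

    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data

    def get(self, key: str) -> bytes:
        return self._data[key]


# A cloud-backed implementation (e.g. wrapping an object-storage SDK)
# would implement the same interface, so deployments can swap backends
# via configuration without touching application code.
```

The design choice: the interface stays tiny on purpose, so that both a managed service in SaaS and an embedded implementation on-prem can satisfy it without leaking provider-specific features into the application.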
Deployment Targets
When deploying to an on-prem environment, it is important to understand the differences between target environments. Depending on an end customer’s security needs, there is a broad spectrum of potential environments ranging from internet-connected cloud VPCs to fully airgapped data centers with no outbound connectivity.
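The spectrum of targets can be made concrete by branching install-time behavior on the environment class. A minimal sketch; the environment names and settings below are illustrative assumptions, not any product’s real configuration schema:

```python
# Hypothetical sketch: mapping a target environment to delivery settings.
def resolve_install_config(environment: str) -> dict:
    """Pick image sourcing and update behavior for a deployment target."""
    if environment == "airgapped":
        # No outbound connectivity: images and updates arrive as bundles.
        return {"image_source": "local-bundle", "update_checks": False}
    if environment == "restricted":
        # Outbound traffic only via an approved internal registry mirror.
        return {"image_source": "internal-registry", "update_checks": True}
    # Internet-connected cloud VPC: pull directly from the vendor.
    return {"image_source": "vendor-registry", "update_checks": True}
```

The point of modeling this explicitly is that features like update checks and image pulls, which are free in a connected VPC, must be designed out (or replaced with offline equivalents) for airgapped customers.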
Machine Learning & AI
As AI becomes more and more ubiquitous within enterprise applications, vendors will need to decide how to best approach the deployment of these systems within their applications. There will be unique considerations around the delivery and execution of untrained foundation models, pre-trained models, fine-tuning and inference. Ultimately, this is going to depend on if your application is an AI provider (foundation model) or AI consumer (you integrate with trained foundation models).