Catalog Details
CATEGORY: deployment
CREATED BY:
UPDATED AT: April 15, 2024
VERSION: 1.0
What this pattern does:
A batch workload is a process typically designed to have a start and a completion point. You should consider batch workloads on GKE if your architecture involves ingesting, processing, and outputting data instead of using raw data. Areas like machine learning, artificial intelligence, and high performance computing (HPC) feature different kinds of batch workloads, such as offline model training, batched prediction, data analytics, simulation of physical systems, and video processing.

By designing containerized batch workloads, you can leverage the following GKE benefits:
- An open standard, broad community, and managed service.
- Cost efficiency from effective workload and infrastructure orchestration and specialized compute resources.
- Isolation and portability of containerization, allowing the use of cloud as overflow capacity while maintaining data security.
- Availability of burst capacity, followed by rapid scale down of GKE clusters.
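As a rough illustration of the pattern, the sketch below expresses a containerized batch workload as a Kubernetes Job that could run on a GKE cluster. The image, command, bucket path, and resource figures are hypothetical placeholders, not part of this pattern's published design.

```yaml
# Minimal sketch of a containerized batch workload as a Kubernetes Job.
# All names, the image, and the command are hypothetical.
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-processing-job            # hypothetical name
spec:
  completions: 4                        # run the task to completion 4 times in total
  parallelism: 2                        # at most 2 Pods processing at once
  backoffLimit: 3                       # retries before the Job is marked failed
  template:
    spec:
      restartPolicy: Never              # batch Pods run to completion rather than restarting in place
      containers:
      - name: worker
        image: us-docker.pkg.dev/example-project/batch/processor:1.0   # hypothetical image
        command: ["python", "process.py", "--input", "gs://example-bucket/raw"]  # hypothetical entrypoint
        resources:
          requests:
            cpu: "500m"
            memory: 512Mi
```

Completions and parallelism let the cluster fan the work out across nodes for burst capacity, and the node pool can scale back down once the Job finishes.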
Caveats and Considerations:
Ensure the components are networked correctly so the workload can communicate and function efficiently (see the example below).
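As one way to scope traffic between components, the sketch below uses a Kubernetes NetworkPolicy to limit the batch workers' egress to a data-ingestion service. The labels, port, and component names are assumptions for illustration only, not part of the published pattern.

```yaml
# Hypothetical sketch: restrict batch worker Pods to reaching only the
# data-ingestion Pods on TCP 8080. Labels and names are placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: batch-worker-egress          # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: batch-worker              # hypothetical label on the Job's Pods
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: data-ingestion        # hypothetical label on the ingestion Pods
    ports:
    - protocol: TCP
      port: 8080
  # Note: once an Egress policy applies, DNS traffic must also be explicitly
  # allowed if the workload resolves service names.
```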
Compatibility: