Alejandro Córdoba Borja, CEO of Tres Astronautas, shares his insights on how AI is moving beyond the hype to deliver real impact across legacy industries like manufacturing. From predictive maintenance and smart quality control to AI-augmented supply chains, these technologies are becoming part of the core infrastructure—not just experimental add-ons.

Alejandro highlights what makes an AI solution truly scalable, pointing to modular architectures, cloud-native tools, and continuous model learning. He also explains how AI outperforms traditional BI by enabling proactive, real-time decision-making across operations.

With a strong focus on practical deployment and adoption, his insights reflect valuable lessons from building custom AI systems that drive measurable outcomes.

How will AI transform legacy industries like manufacturing?

AI in manufacturing is already moving from pilot projects to core infrastructure. Here’s how it’s transforming the space:

  • Predictive Maintenance: AI algorithms can monitor equipment performance and predict failures before they happen, reducing downtime and maintenance costs. This is key for Industry 4.0 implementations (a minimal sketch follows this list).
  • Smart Quality Control: Computer vision can inspect products in real-time, identifying defects more accurately than humans.
  • Supply Chain Optimization: AI can process vast data to adjust procurement, manage inventory, and forecast demand with high precision.
  • Human-Augmented Automation: Cobots (collaborative robots) enhanced with AI can assist in tasks requiring precision or learning from human behaviors.
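To make the predictive-maintenance point concrete, here is a minimal sketch of unsupervised anomaly detection on machine telemetry. The sensor columns, values, and thresholds are illustrative assumptions, not a model from a specific project.

```python
# Flag anomalous sensor readings before they turn into failures.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical telemetry: one row per machine per minute.
telemetry = pd.DataFrame({
    "vibration_mm_s": [2.1, 2.3, 2.2, 9.8, 2.4],
    "temperature_c": [61.0, 62.5, 61.8, 88.3, 62.1],
    "current_a": [12.4, 12.6, 12.5, 19.9, 12.7],
})

# Fit an unsupervised detector on (mostly) normal operation.
detector = IsolationForest(contamination=0.05, random_state=42)
detector.fit(telemetry)

# -1 marks readings that deviate from learned normal behaviour
# and should trigger a maintenance inspection.
telemetry["anomaly"] = detector.predict(telemetry)
print(telemetry[telemetry["anomaly"] == -1])
```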

We’ve seen this up close in logistics, where we helped process over 300,000 packages/day by starting with AI-based image recognition using RGB maps — a small step with huge operational implications.
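As a rough illustration of the image-recognition side, the sketch below classifies a single photo with a pretrained CNN backbone. The model, labels, and file path are placeholders; a production package or defect classifier would be fine-tuned on domain-specific images.

```python
# Minimal image-classification sketch with a pretrained backbone.
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

image = Image.open("package_photo.jpg").convert("RGB")  # placeholder path
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)
    predicted_class = logits.argmax(dim=1).item()

print(f"Predicted class index: {predicted_class}")
```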

What makes an AI solution scalable?

Scalability in AI depends on both tech and process maturity. Key factors:

  • Modular Architecture: Using microservices and containerized deployments (Kubernetes, Docker) allows independent scaling of components (see the sketch after this list).
  • Cloud-Native Infrastructure: Leveraging platforms like AWS SageMaker, Azure ML, or GCP Vertex AI for elastic compute, auto-scaling, and managed model lifecycle tools.
  • Data Pipelines: Automated, clean, and retrainable data flows are essential. Without them, even the smartest model fails at scale.
  • Retraining and MLOps: A scalable model includes versioning, monitoring, and pipelines for continuous improvement.
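As a minimal sketch of the modular-architecture point, the service below exposes a single prediction endpoint that can run in its own container and scale independently. The service name, request fields, and the stubbed model loader are assumptions for illustration.

```python
# A small, stateless inference service: one modular component that can be
# containerized and scaled horizontally behind Kubernetes.
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demand-forecast-service")  # hypothetical service name


class PredictionRequest(BaseModel):
    sku: str
    recent_daily_sales: List[float]


def load_model():
    # Placeholder: in practice this would pull a versioned model artifact
    # (e.g. from a model registry or object store) at container start-up.
    return lambda history: sum(history) / max(len(history), 1)


model = load_model()


@app.post("/predict")
def predict(req: PredictionRequest) -> dict:
    # Each replica is stateless, so capacity scales by adding pods.
    return {"sku": req.sku, "next_day_forecast": model(req.recent_daily_sales)}

# Run locally (assuming this file is service.py): uvicorn service:app --reload
```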

How does AI compare to traditional business intelligence (BI)?

Feature         | Traditional BI             | AI/ML Solutions
----------------|----------------------------|----------------------------------------
Data Dependency | Historical data            | Historical + real-time data
Insights        | Descriptive & Diagnostic   | Predictive & Prescriptive
Interactivity   | Dashboards, static queries | Dynamic learning models, automation
Adaptability    | Manual refresh             | Self-learning, retrains with new data
User            | Analysts/Executives        | Integrated across operations & systems

AI goes beyond analyzing what happened to suggest what to do next — empowering proactive decision-making.
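A toy example of that shift, using synthetic numbers: the descriptive step summarizes history, the predictive step forecasts the next period, and a simple rule turns the forecast into an action.

```python
# Descriptive BI (what happened) vs. a predictive model (what is likely next).
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

sales = pd.DataFrame({
    "week": np.arange(1, 13),
    "units": [120, 125, 130, 128, 140, 145, 150, 148, 160, 162, 170, 175],
})

# Descriptive / diagnostic: summarise historical performance.
print("Average weekly units:", sales["units"].mean())

# Predictive: fit a simple trend model and forecast the next week.
model = LinearRegression().fit(sales[["week"]], sales["units"])
next_week = pd.DataFrame({"week": [13]})
forecast = float(model.predict(next_week)[0])
print("Forecast for week 13:", forecast)

# Prescriptive (toy rule): turn the forecast into an action.
if forecast > sales["units"].iloc[-1]:
    print("Suggested action: increase inventory ahead of demand.")
```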

How will AI impact predictive analytics?

AI takes predictive analytics further by:

  • Handling Complex Data: From structured sales records to unstructured customer reviews and IoT sensor data.
  • Real-Time Learning: AI models adapt to new inputs on the fly — crucial in volatile environments like supply chain or finance.
  • Scenario Simulation: AI enables simulations and “what-if” forecasting under uncertainty, outperforming static statistical models.
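As a concrete example of scenario simulation, here is a minimal Monte Carlo sketch of next-month demand under an uncertain growth rate and a possible supply disruption. All figures and distributions are illustrative assumptions.

```python
# "What-if" forecasting under uncertainty via Monte Carlo simulation.
import numpy as np

rng = np.random.default_rng(seed=7)

current_demand = 10_000          # units this month (assumed)
n_scenarios = 50_000

# Uncertain inputs: baseline growth plus a possible supply disruption.
growth = rng.normal(loc=0.03, scale=0.05, size=n_scenarios)
disruption = rng.binomial(n=1, p=0.10, size=n_scenarios) * rng.uniform(0.1, 0.3, n_scenarios)

simulated_demand = current_demand * (1 + growth) * (1 - disruption)

print(f"Expected demand: {simulated_demand.mean():,.0f} units")
print(f"5th-95th percentile: {np.percentile(simulated_demand, 5):,.0f} - "
      f"{np.percentile(simulated_demand, 95):,.0f} units")
```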

This aligns closely with what we offer in terms of custom AI solutions for industries like finance and manufacturing — helping CTOs like “John” modernize decision-making tools.

What strategies do you use for continuous learning and model retraining?

At Tres Astronautas, when working on AI-driven products, we ensure continuous learning through:

  • Automated Retraining Pipelines (MLOps): Triggered by data drift, performance decay, or scheduled intervals (a drift-check sketch follows this list).
  • Version Control for Models: Using tools like MLflow or DVC to manage experiments, model versions, and rollbacks.
  • Monitoring & Feedback Loops: Real-time dashboards track model confidence, accuracy, and anomalies.
  • User Behavior Analytics: We embed tracking in the Adopción (adoption) stage to refine models based on real-world usage.
  • Staged Deployment (Canary/Shadow): To test retrained models in production safely.
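As a minimal sketch of a drift check that could gate one of those automated retraining pipelines, the snippet below compares the training distribution of one feature against a recent production window. The feature, threshold, and retrain hook are assumptions, not our production setup.

```python
# Drift check on one monitored feature, used as a retraining trigger.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Reference window (what the current model was trained on) vs. the most
# recent production window for one feature.
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
recent_feature = rng.normal(loc=0.4, scale=1.2, size=5_000)  # drifted

statistic, p_value = ks_2samp(training_feature, recent_feature)

DRIFT_THRESHOLD = 0.05  # illustrative significance level

if p_value < DRIFT_THRESHOLD:
    # In a real pipeline this would kick off a retraining run and register
    # the new model version (e.g. via MLflow) for a staged rollout.
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.4f}): trigger retraining")
else:
    print("No significant drift: keep serving the current model version")
```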

What infrastructure or cloud platforms do you typically use for AI deployment?

We recommend and work with platforms depending on the client’s needs and internal stack:

  • AWS: SageMaker for model training/deployment, Lambda for real-time inference, and Step Functions for orchestration (see the endpoint-invocation sketch after this list).
  • GCP: Vertex AI is a solid choice for enterprises with big data teams and existing BigQuery usage.
  • Azure: Preferred for clients in regulated industries (health, finance) due to compliance-ready offerings.
  • Custom/Nearshore Hybrid: For specific clients, especially those with data privacy constraints, we build containerized deployments managed via Kubernetes clusters — with secure CI/CD pipelines via GitHub Actions, GitLab, or Bitbucket.
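As a minimal sketch of the AWS path, this is roughly how application code (for example, a Lambda handler) would call a deployed SageMaker real-time endpoint. The endpoint name and payload schema are placeholders for whatever the deployed model expects.

```python
# Invoke a deployed SageMaker real-time endpoint for inference.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {"vibration_mm_s": 9.8, "temperature_c": 88.3}  # hypothetical features

response = runtime.invoke_endpoint(
    EndpointName="predictive-maintenance-endpoint",  # placeholder name
    ContentType="application/json",
    Body=json.dumps(payload),
)

prediction = json.loads(response["Body"].read())
print(prediction)
```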

We also prioritize cloud-native DevSecOps practices — seen in our work with logistics, fintech, and healthcare platforms.
