plexe lets you create machine learning models by describing them in plain language. Simply explain what you want,
and the AI-powered system builds a fully functional model through an automated agentic approach. Also available as a
managed cloud service.
You can use plexe as a Python library to build and train machine learning models:
```python
import plexe

# Define the model
model = plexe.Model(
    intent="Predict sentiment from news articles",
    input_schema={"headline": str, "content": str},
    output_schema={"sentiment": str}
)

# Build and train the model
model.build(
    datasets=[your_dataset],
    provider="openai/gpt-4o-mini",
    max_iterations=10
)

# Use the model
prediction = model.predict({
    "headline": "New breakthrough in renewable energy",
    "content": "Scientists announced a major advancement..."
})

# Save for later use
plexe.save_model(model, "sentiment-model")
loaded_model = plexe.load_model("sentiment-model.tar.gz")
```
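Schemas are plain dictionaries mapping field names to Python types. As a rough illustration of what it means for an input to "match" a schema (a hypothetical helper, not plexe's internal implementation), consider:

```python
def matches_schema(record: dict, schema: dict) -> bool:
    """Illustrative check: the record has exactly the schema's fields,
    and each value has the declared Python type."""
    if set(record) != set(schema):
        return False
    return all(isinstance(record[name], typ) for name, typ in schema.items())

input_schema = {"headline": str, "content": str}
ok = matches_schema({"headline": "Breakthrough", "content": "Scientists..."}, input_schema)
bad = matches_schema({"headline": "Breakthrough", "content": 42}, input_schema)
```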
2.1. 💬 Natural Language Model Definition
Define models using plain English descriptions:
```python
model = plexe.Model(
    intent="Predict housing prices based on features like size, location, etc.",
    input_schema={"square_feet": int, "bedrooms": int, "location": str},
    output_schema={"price": float}
)
```
2.2. 🤖 Multi-Agent Architecture
The system uses a team of specialized AI agents to:
- Analyze your requirements and data
- Plan the optimal model solution
- Generate and improve model code
- Test and evaluate performance
- Package the model for deployment
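The agent names below are purely illustrative stand-ins, not plexe's actual classes; the sketch only shows the pipeline shape described above, where each stage consumes the previous stage's output:

```python
# Hypothetical sketch of the agent hand-off; plexe's real agents are
# LLM-backed, but the staged pipeline structure is the same idea.
def analyze(requirements):
    return {"task": "regression", "requirements": requirements}

def plan(analysis):
    return {**analysis, "approach": "gradient boosting"}

def generate_code(solution_plan):
    return {**solution_plan, "code": "model = train(...)"}

def evaluate(candidate):
    return {**candidate, "score": 0.87}

def package(evaluated):
    return {**evaluated, "artifact": "model.tar.gz"}

result = package(evaluate(generate_code(plan(analyze("predict prices")))))
```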
2.3. 🎯 Automated Model Building
Build complete models with a single method call:
```python
model.build(
    datasets=[dataset_a, dataset_b],
    provider="openai/gpt-4o-mini",  # LLM provider
    max_iterations=10,              # Max solutions to explore
    timeout=1800                    # Optional time limit in seconds
)
```
2.4. 🚀 Distributed Training with Ray
Plexe supports distributed model training and evaluation with Ray for faster parallel processing:
```python
from plexe import Model

# Optional: Configure Ray cluster address if using remote Ray
# from plexe import config
# config.ray.address = "ray://10.1.2.3:10001"

model = Model(
    intent="Predict house prices based on various features",
    distributed=True  # Enable distributed execution
)

model.build(
    datasets=[df],
    provider="openai/gpt-4o-mini"
)
```
Ray distributes your workload across available CPU cores, significantly speeding up model generation and evaluation when exploring multiple model variants.
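The speed-up comes from evaluating candidate solutions concurrently rather than one at a time. The idea can be sketched with the standard library's `concurrent.futures` instead of Ray (so it runs without a cluster); the variant names and scores are made up for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_variant(variant: str) -> float:
    # Stand-in for training and scoring one candidate model
    return {"linear": 0.71, "tree": 0.78, "boosted": 0.84}[variant]

variants = ["linear", "tree", "boosted"]
with ThreadPoolExecutor() as pool:
    # Score all candidates concurrently, then pick the best
    scores = dict(zip(variants, pool.map(evaluate_variant, variants)))

best = max(scores, key=scores.get)
```

Ray generalizes this pattern from one machine's threads to a whole cluster of workers.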
2.5. 🎲 Data Generation & Schema Inference
Generate synthetic data or infer schemas automatically:
```python
# Generate synthetic data
dataset = plexe.DatasetGenerator(
    schema={"features": str, "target": int}
)
dataset.generate(500)  # Generate 500 samples

# Infer schema from intent
model = plexe.Model(intent="Predict customer churn based on usage patterns")
model.build(provider="openai/gpt-4o-mini")  # Schema inferred automatically
```
2.6. 🌐 Multi-Provider Support
Use your preferred LLM provider, for example:
```python
model.build(provider="openai/gpt-4o-mini")          # OpenAI
model.build(provider="anthropic/claude-3-opus")     # Anthropic
model.build(provider="ollama/llama2")               # Ollama
model.build(provider="huggingface/meta-llama/...")  # Hugging Face
```
Plexe should work with most LiteLLM providers, but we actively test only against `openai/*` and `anthropic/*`
models. If you encounter issues with another provider, please let us know.
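Provider strings follow LiteLLM's `provider/model` convention: everything before the first slash names the provider, and the remainder names the model. A small sketch of that split (a hypothetical helper, not part of plexe's API):

```python
def split_provider(spec: str) -> tuple[str, str]:
    """Split a LiteLLM-style provider string into (provider, model)
    at the first slash; the model part may itself contain slashes."""
    provider, _, model = spec.partition("/")
    return provider, model

provider, model = split_provider("openai/gpt-4o-mini")
hf_provider, hf_model = split_provider("huggingface/meta-llama/Llama-2-7b")
```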
3.1. Installation Options
```bash
pip install plexe               # Standard installation
pip install plexe[lightweight]  # Minimal dependencies
pip install plexe[all]          # With deep learning support
```

```bash
# Set your preferred provider's API key
export OPENAI_API_KEY=<your-key>
export ANTHROPIC_API_KEY=<your-key>
export GEMINI_API_KEY=<your-key>
```