Real problems Laminate solves
From AI pipelines to healthcare data — see Laminate in action
AI/LLM Response Handling
The Problem
LLM APIs return messy, inconsistent JSON. Anthropic stringifies tool arguments. OpenAI streams fragments across dozens of SSE events. Schema changes arrive without warning. And strict serde deserialization fails at the first surprise.
The Solution
use laminate::provider::anthropic::parse_anthropic_response;
let response = parse_anthropic_response(&raw_body)?;
let text = response.text();
for tool_call in response.tool_uses() {
    let (id, name, input) = tool_call.as_tool_use().unwrap();
    let query: String = input.extract("query")?;
}
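The streaming problem above — text arriving as many small deltas — reduces to accumulating fragments until the stream signals completion. A std-only sketch of that accumulation (the event shapes here are illustrative, not Anthropic's exact wire format and not Laminate's internals):

```rust
// Minimal model of streamed events: text fragments followed by a stop marker.
enum Event<'a> {
    TextDelta(&'a str),
    Done,
}

// Fold the fragments into the full response text.
fn accumulate(events: &[Event]) -> String {
    let mut text = String::new();
    for ev in events {
        match ev {
            Event::TextDelta(d) => text.push_str(d),
            Event::Done => break,
        }
    }
    text
}

fn main() {
    let events = [
        Event::TextDelta("The answer"),
        Event::TextDelta(" is 42."),
        Event::Done,
    ];
    assert_eq!(accumulate(&events), "The answer is 42.");
}
```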
CSV/Config File Parsing
The Problem
CSV columns are always strings. Environment variables are always strings. Config files mix types freely. Every pipeline builds its own string-to-number conversion, and every one has different edge cases.
The Solution
use laminate::FlexValue;
use laminate::value::SourceHint;
// Tell laminate the data came from CSV — enables string coercion by default
let row = FlexValue::from_json(csv_row_json)?
    .with_source_hint(SourceHint::Csv);
let price: f64 = row.extract("price")?; // "29.99" → 29.99
let active: bool = row.extract("active")?; // "true" → true
let count: i64 = row.extract("quantity")?; // "42" → 42
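Under the hood, source-hinted coercion amounts to retrying the typed read against the string form of the cell. A std-only sketch of the idea (illustrative, not Laminate's actual implementation — the function names are ours):

```rust
// Coerce a CSV cell (always a string) into the type the caller asked for.
// Mirrors the idea behind SourceHint::Csv, not Laminate's real code.
fn coerce_f64(cell: &str) -> Option<f64> {
    cell.trim().parse::<f64>().ok()
}

fn coerce_bool(cell: &str) -> Option<bool> {
    match cell.trim().to_ascii_lowercase().as_str() {
        "true" | "yes" | "1" => Some(true),
        "false" | "no" | "0" => Some(false),
        _ => None,
    }
}

fn main() {
    assert_eq!(coerce_f64("29.99"), Some(29.99)); // "29.99" → 29.99
    assert_eq!(coerce_bool("true"), Some(true));  // "true" → true
    assert_eq!(coerce_bool("banana"), None);      // garbage stays an error
}
```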
REST API Consumption
The Problem
External APIs change schemas without notice. Optional fields appear and disappear. Types drift — an ID that was always a number suddenly arrives as a string. Your pipeline needs to handle what it gets, not what the docs promise.
The Solution
use laminate::Laminate;
use std::collections::HashMap;

#[derive(Laminate, Debug)]
struct ApiUser {
    #[laminate(coerce)]
    id: i64, // handles "123" or 123
    name: String,
    #[laminate(default)]
    email: Option<String>, // missing → None
    #[laminate(overflow)]
    extra: HashMap<String, Value>, // unknown fields captured, not dropped
}
let user = ApiUser::shape_lenient(&api_response)?;
// user.value has your struct; user.diagnostics tells you what was coerced
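The coerce-and-record pattern behind lenient shaping can be illustrated in plain Rust: try the strict read first, fall back to coercion, and note every fallback in a diagnostics list. A sketch under that assumption (our helper, not Laminate's internals):

```rust
// Read an i64 field that may arrive as a bare number or a quoted string,
// recording a diagnostic whenever coercion kicked in.
fn lenient_i64(raw: &str, field: &str, diags: &mut Vec<String>) -> Option<i64> {
    // Strict path: the value is already a bare integer.
    if let Ok(n) = raw.parse::<i64>() {
        return Some(n);
    }
    // Lenient path: the value arrived quoted, e.g. "\"123\"".
    if let Ok(n) = raw.trim_matches('"').parse::<i64>() {
        diags.push(format!("{field}: coerced string {raw} to integer"));
        return Some(n);
    }
    None
}

fn main() {
    let mut diags = Vec::new();
    assert_eq!(lenient_i64("123", "id", &mut diags), Some(123));
    assert!(diags.is_empty()); // no coercion needed
    assert_eq!(lenient_i64("\"123\"", "id", &mut diags), Some(123));
    assert_eq!(diags.len(), 1); // coercion happened and was recorded
}
```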
Data Quality Auditing
The Problem
You need to know what's IN your data before you can trust it. How many nulls? What types does each column actually contain? Does the real data match the documented schema?
The Solution
use laminate::schema::InferredSchema;
let schema = InferredSchema::from_values(&sample_rows);
println!("{}", schema.summary());
// Columns: id(Integer, 100% fill), name(String, 98% fill),
// score(Float/String mixed, 95% fill)
let report = schema.audit(&new_data);
for v in &report.violations {
    println!("{}: {:?} at row {}", v.field, v.kind, v.row_index);
}
Healthcare / Lab Data
The Problem
Medical lab values use different units across countries (mg/dL vs mmol/L). Date formats vary wildly (ISO, HL7, FHIR). Identifier validation (NHS numbers, NPIs) is scattered across niche libraries.
The Solution
use laminate::packs::medical::{convert_lab_value, classify_lab_value};
let glucose_si = convert_lab_value(126.0, "glucose", "mg/dL", "mmol/L");
// Some(6.99) — US conventional to SI units
let classification = classify_lab_value(126.0, "glucose", "mg/dL");
// Some(High) — above normal fasting range
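The glucose conversion itself is plain arithmetic: glucose's molar mass is about 180.16 g/mol, so mg/dL × 10 ÷ 180.16 gives mmol/L. A quick sketch of the math (the conversion factor is standard chemistry; the function name is ours, not Laminate's):

```rust
// Convert a glucose reading from US conventional (mg/dL) to SI (mmol/L).
// mg/dL → mmol/L: ×10 converts dL to L, ÷ molar mass converts mg to mmol.
fn glucose_mg_dl_to_mmol_l(mg_dl: f64) -> f64 {
    const GLUCOSE_MOLAR_MASS: f64 = 180.16; // g/mol
    mg_dl * 10.0 / GLUCOSE_MOLAR_MASS
}

fn main() {
    let si = glucose_mg_dl_to_mmol_l(126.0);
    assert!((si - 6.99).abs() < 0.01); // matches the Some(6.99) above
    println!("{si:.2} mmol/L");
}
```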
Type Detection & Data Profiling
The Problem
You have a column of strings. Some are dates, some are numbers, some are UUIDs, some are credit card numbers. You need to know what they are before you can process them.
The Solution
use laminate::detect::guess_type;
let guesses = guess_type("4111111111111111");
// [CreditCard(0.90), Integer(0.50)] — credit card wins
let guesses = guess_type("2026-04-06T14:30:00Z");
// [Date(Iso8601, 0.85)] — ISO 8601 datetime
let guesses = guess_type("$1,234.56");
// [Currency(0.90)] — US dollar format
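Detectors like CreditCard typically pair shape checks with a checksum; the classic one is the Luhn algorithm, which "4111111111111111" passes. A std-only sketch of a Luhn validator (Laminate's internal detector may differ):

```rust
// Luhn checksum: from the right, double every second digit, subtract 9 from
// doubles above 9, sum everything; valid numbers sum to a multiple of 10.
fn luhn_valid(s: &str) -> bool {
    let digits: Vec<u32> = s.chars().filter_map(|c| c.to_digit(10)).collect();
    if digits.len() != s.len() || digits.len() < 13 {
        return false; // non-digit characters, or too short to be a card number
    }
    let sum: u32 = digits
        .iter()
        .rev()
        .enumerate()
        .map(|(i, &d)| {
            if i % 2 == 1 {
                let doubled = d * 2;
                if doubled > 9 { doubled - 9 } else { doubled }
            } else {
                d
            }
        })
        .sum();
    sum % 10 == 0
}

fn main() {
    assert!(luhn_valid("4111111111111111"));  // the test number above
    assert!(!luhn_valid("4111111111111112")); // one digit off fails
}
```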