# CV Repro Lab Skills

> Public ClawHub releases for benchmark-gated CV experimentation, browser validation, and promotion gating.

## Summary
I turned a reproducible CV experimentation workflow into two public, installable ClawHub skills for teams running browser-heavy and GPU-heavy vision work. The releases package experiment records, browser notebook evidence, heartbeat-aware VM execution, review dashboards, and promotion bundles that separate semantic, runtime, and product-surface checks.
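The three-way gate separation described above can be sketched as follows. This is a minimal illustrative sketch, not the skill's actual API; all names, thresholds, and candidate fields here are hypothetical assumptions chosen to show the idea that a release is promoted only when every gate passes independently.

```python
# Hypothetical promotion-gate sketch (illustrative names, not the skill's API).
# A candidate is promoted only if the semantic, runtime, and product-surface
# gates all pass independently.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class GateResult:
    gate: str
    passed: bool


def run_gates(candidate: dict, gates: Dict[str, Callable[[dict], bool]]) -> List[GateResult]:
    """Evaluate every gate against the candidate; promotion requires all to pass."""
    return [GateResult(name, bool(check(candidate))) for name, check in gates.items()]


# Assumed example gates: model quality, performance budget, end-user surface.
gates = {
    "semantic": lambda c: c["accuracy"] >= c["baseline_accuracy"],
    "runtime": lambda c: c["p95_latency_ms"] <= 200,
    "product-surface": lambda c: c["ui_checks_passed"],
}

candidate = {
    "accuracy": 0.91,
    "baseline_accuracy": 0.89,
    "p95_latency_ms": 140,
    "ui_checks_passed": True,
}

results = run_gates(candidate, gates)
promote = all(r.passed for r in results)  # True only when every gate passes
```

Keeping each gate as an independent predicate is what makes the failure report useful: a rejected candidate shows exactly which surface (semantic, runtime, or product) blocked promotion.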

## Project Link
https://zack-dev-cm.github.io/projects/cv-repro-lab-skills.md

## Key Features
- Packages benchmark-gated CV experimentation into two public ClawHub skills teams can install and reuse
- Captures reproducible experiment state with run cards, dataset manifests, review dashboards, and redacted public context snapshots
- Validates Colab, Kaggle, and browser-driven CV workflows with browser run cards and per-image validation scorecards
- Adds campaign planning and claim review with contamination checks, rerun policy, and benchmark evidence
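A contamination check of the kind listed above can be approximated with content-hash overlap between manifests. This is an assumed sketch, not the skill's real helper: the manifest fields (`id`, `sha256`) and function names are hypothetical, and it shows only the core idea of flagging benchmark entries whose content already appears in the training set.

```python
# Illustrative contamination check (hypothetical fields, not the skill's helper):
# flag eval-manifest entries whose content hash also appears in the training
# manifest, so benchmark claims are not scored on leaked duplicates.
import hashlib
from typing import List, Set


def content_hash(data: bytes) -> str:
    """Stable content fingerprint for a dataset item."""
    return hashlib.sha256(data).hexdigest()


def contaminated_ids(train_hashes: Set[str], eval_manifest: List[dict]) -> List[str]:
    """Return ids of eval entries that collide with the training set."""
    return [e["id"] for e in eval_manifest if e["sha256"] in train_hashes]


# Assumed example data: one leaked duplicate, one clean entry.
train_hashes = {content_hash(b"img-001"), content_hash(b"img-002")}
eval_manifest = [
    {"id": "e1", "sha256": content_hash(b"img-002")},  # duplicate of a train image
    {"id": "e2", "sha256": content_hash(b"img-999")},  # clean
]

flagged = contaminated_ids(train_hashes, eval_manifest)  # → ["e1"]
```

Hashing item content rather than filenames catches renamed copies, which is the usual leakage mode when the same public images appear in both a training dump and a benchmark split.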

## Tech Stack
- ClawHub
- OpenClaw Skills
- Python
- PyTorch
- Computer Vision
- Google Colab
- Kaggle
- MLOps
- Release Engineering

## Benchmarks & Analytics
- Live packages: 2 (`data-science-cv-repro-lab` and `sota-agent`)
- Current versions: v1.9.1 / v1.4.1 (ClawHub releases)
- Promotion gates: 3 (semantic, runtime, and product-surface checks)
- Structured helpers: 29 scripts (manifests, scorecards, summaries, and claim-review tools)

## Links
- [View on GitHub](https://github.com/zack-dev-cm/agentic-cv-repro-lab-skill)
- [Open on ClawHub](https://clawhub.ai/zack-dev-cm/data-science-cv-repro-lab)
