Pilot Planning Guide and Checklist

This guide is a pragmatic, end-to-end playbook for designing, running, and evaluating a smart connected product pilot. Following this guide can help you de-risk your launch, validate assumptions, and demonstrate value to stakeholders long before committing to scale.

note

Key Takeaways

  • A pilot validates that your product works in real environments with real customers (it's not an extended prototype!).
  • A pilot also proves your business case: if you plan to sell your smart connected product as a commercial device or service, it delivers the proof points and information you need to go to market.
  • Deploy 10 or more devices in production-like conditions to stress-test hardware, firmware, and operations.
  • Define success metrics and exit criteria before deploying the first device.
  • The goal is a "go/no-go" decision with confidence: either scale to production or iterate.

In This Guide

This guide is structured as five sequential phases, each building toward a confident scale decision:

  • Phase 1: Define What This Pilot Must Prove: Clarify the value you intend to demonstrate and what success means before deploying a single device.
  • Phase 2: Confirm You Are Ready: Ensure your hardware, firmware, cloud architecture, and operations are mature enough for a production-like deployment.
  • Phase 3: Design the Pilot Intentionally: Define success metrics, select the right customers, set scope, assign roles, and identify risks.
  • Phase 4: Execute and Measure: Deploy in waves, instrument everything, validate real-world performance, and monitor continuously.
  • Phase 5: Decide and Scale: Review results against predefined thresholds and determine whether to scale, iterate, or stop.

Phase 1: Define What This Pilot Must Prove

Understand the Value of Running a Pilot

A pilot is a time‑boxed, production‑like exercise for a smart connected product with a small cohort of real users, devices, and/or generated data that answers these questions:

  1. Does the solution work in the field? (i.e. is there technical and operational fit)
  2. Is it adding value to my pilot customer(s)? (i.e. does it deliver economic value for the end user)

When done right, a successful pilot provides:

  1. Product design, firmware engineering, and cloud engineering roles the confidence to "freeze" the product hardware, firmware, and cloud code.
  2. Operations teams a repeatable installation and preliminary support motion.
  3. End customers a clear value prop and proven ROI.

However, a pilot is not proof of product success at scale.

A pilot will almost always cost more per device, per installation, and per support interaction than production. That's expected. The goal is to prove viability, not optimize economics. Keep this in mind:

  1. Pilot hardware costs are artificially high. Small-batch builds will not reflect production unit economics. Do not treat your pilot BOM as your long-term margin.
  2. Pilot cost data is directional. Early support, data usage, and cloud costs will be noisy. Use them to validate your model's assumptions, not define the final model.
  3. Pilot operations are not optimized. Installs take longer. Support is more hands-on. The pilot proves the motion works, not that it is efficient.
  4. A few customers validate value, not the market. A small cohort can confirm the product delivers outcomes, but it cannot finalize pricing or market size.

Set these expectations upfront with your economic buyer.

A pilot is successful if it proves the product works and delivers value, even if unit economics are not yet at production targets.

Define Your Value Creation and Value Capture Hypotheses

A successful pilot must do more than prove that the product works. It must test a clear point of view about how value is created and how value is captured.

Value creation answers the question: what measurable improvement does this solution deliver? This may include reduced downtime, fewer truck rolls, improved asset utilization, lower energy costs, or increased revenue per unit.

Value capture answers a different question: how does your company participate in that improvement economically? Through subscription revenue, hardware margin, data services, performance-based pricing, or bundled offerings.

Before launching your pilot, explicitly define:

  • The primary value driver you are targeting
  • How that value will be measured during the pilot
  • Who within the customer organization experiences the value
  • Who approves budget based on that value
  • How your intended pricing model aligns to that value

A pilot that demonstrates operational improvement but does not validate willingness to pay leaves commercial risk unresolved.

During the pilot, test both sides of the equation. Measure the economic impact for the customer, and evaluate whether your pricing model reflects that impact credibly and competitively.

For example:

If you claim to reduce equipment downtime by 25%, quantify the financial impact of that reduction. If your pricing model is subscription-based, confirm that the annual savings meaningfully exceed the subscription cost.
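
The arithmetic behind that check can be sketched in a few lines. Every figure below (baseline downtime, cost per hour, subscription price) is a hypothetical placeholder to replace with your own numbers; only the 25% reduction mirrors the example claim:

```python
# Hypothetical value-capture check: do annual savings from a claimed
# 25% downtime reduction meaningfully exceed the subscription cost?

baseline_downtime_hours = 200      # per site per year (assumed)
cost_per_downtime_hour = 1_500     # USD (assumed)
downtime_reduction = 0.25          # the 25% claim being piloted

annual_savings = baseline_downtime_hours * cost_per_downtime_hour * downtime_reduction

subscription_per_site = 6_000      # USD per year (assumed pricing)
savings_multiple = annual_savings / subscription_per_site

print(f"Annual savings: ${annual_savings:,.0f}")
print(f"Savings vs. subscription: {savings_multiple:.1f}x")
```

Whatever savings multiple you consider "meaningful," agree on it with the economic buyer before the pilot starts.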

If you claim to prevent losses or safety incidents, document avoided costs and validate that decision-makers recognize that impact as budget-worthy.

The goal is not to finalize pricing during the pilot. The goal is to confirm that value creation and value capture are logically connected.

Without this validation, you may scale a technically sound solution that struggles commercially.

note

The rest of this guide assumes you are integrating Blues Notecard and Notehub into your connected product.

Phase 2: Confirm You Are Ready

Pilot Prerequisites

At the product level, you need to have already defined a clear problem statement, a target segment/persona, and a value hypothesis (e.g. "what outcome is this device supposed to change, and for whom?"). Your pilot isn't meant to uncover this - instead, it answers the questions: will it work, and will it deliver the value you want it to (whether that's generating revenue, saving cost, reducing operational expenses, enabling a new line of business, or something else)?

If you haven't already, review our guide on Making the Business Case to learn how to build a clear, credible internal case that a connected product delivers real business value.

At the hardware/firmware level, you should have a working product with a functional Notecard integration on a representative hardware platform (i.e. something more significant than a breadboard project), a stable power design, the right antennas/enclosure assumptions, and a firmware image that already performs your core functionality. You should also have at least a draft DFU strategy that you are prepared to exercise during pilot deployments.

At the cloud/data level, Notehub should already be connected to your downstream cloud application(s) with a defined schema, routing rules working end‑to‑end, and a place where your pilot team can actually see data (e.g. a cloud dashboard, logs, or BI - and ideally begin integrating that data directly into operational workflows and decision-making processes). Pilots fail when field devices are ready but data has nowhere business‑friendly to land.

At the operations level, you should have a draft installation guide, an identified on-site primary contact, a basic data usage model if using cellular (e.g. KB/device/day with buffer), and a model for supporting and obtaining feedback from pilot customers.

At the commercial level, you'll need a clear plan for pricing, packaging, and sales channels. Finally, you should have stakeholder alignment on the decision you want the pilot to enable (e.g. "green‑light a paid customer rollout in Q2") and a clear view of the supporting data you need to obtain from the pilot in order to make that decision and move into production.

If these prerequisites are not met, pause and address them before moving forward with a connected product pilot.

Pre-Pilot Readiness Checklist

  • Pilot Charter approved (define scope, deployment sites, and timelines)
  • Success metrics and thresholds baselined, dashboards defined
  • Devices ready with consistent firmware version and DFU plan in place
  • Provisioning tested end‑to‑end (activation → fleet assignment → data pipeline validated)
  • Installer kit (if needed): video, one‑pager, checklists; training complete for installers
  • Support model in place with initial support runbooks ready
  • Pilot agreements signed with initial customers

Confirm Operational Viability

A product that works technically but cannot be installed or supported predictably is not ready to scale. Confirm that:

  • Installation is repeatable
  • Remote resolution is viable
  • Support load is manageable
  • Fleet segmentation and monitoring processes function cleanly

Scaling multiplies operational weaknesses.

Phase 3: Design the Pilot Intentionally

Design Principles for a Successful Smart Connected Product Pilot

The pilot deployment should be representative, but not necessarily exhaustive. It should include enough devices (ideally at least 10) to stress power requirements, RF signal quality, and expected installation conditions, and to let you analyze end-user UX. If the physical environments could vary greatly, ideally deploy a set of devices in each distinct environment.

Make sure the deployments are as "production‑like" as possible, meaning real activation, claiming/provisioning, telemetry, and data routing through Notehub to any end cloud applications.

Understand and document your customer's pilot experience. You may be piloting a product, but they're piloting a brand-new workflow, often one that will impact their own end users. Use their language, their terminology, and their frame of reference when describing the pilot. Focus on the value their solution delivers: the business problems it resolves, the efficiency gains it unlocks, and the cost savings it creates. Guide them toward a clear vision of impact that goes far beyond the technical mechanics. When defining success criteria for your customer's end customer, avoid mentioning Blues entirely. This pushes the conversation toward the outcomes and benefits of the full solution, rather than the components or underlying technology.

Every device must be instrumented by default. Every critical feature should emit a metric (e.g. time‑to‑first‑data, events expected vs. received, cellular data usage expected vs. actual), and every metric should map to a strategic or business requirement. Decide ahead of time what your success metrics are: commit to defining what a successful pilot means for you before the deployments start, and decide who owns the final "go vs. no-go" decision.

Choose Your Success Metrics

Building on the previous step, define your success metrics prior to the pilot deployments. The following are ideas to get you thinking about the success metrics that may be critical for your pilot.

Technical Metrics and Device Health

  • Time‑to‑first‑data (e.g. x minutes after activation).
  • Reliability of event syncing (e.g. 99% of expected cadence).
  • Data completeness (e.g. 98% of data with no missing data elements).
  • Cellular signal within target (e.g. 2+ bars) with no devices in a penalty box.
  • Battery life projection (e.g. x months with expected usage) based on pilot performance.
  • Over-the-air update (DFU) success rate (e.g. 99% with no bricked devices) and time-to-update relative to expectations.

Operational & Support

  • First‑try install success (e.g. 80% with learnings quickly adapted).
  • Remote resolution rate of issues (e.g. 50% of issues resolved without field visit).
  • CSAT of both the installation experience and operational performance.

Commercial & Outcome

note

Always connect these metrics back to the business case that justified the project in the first place!

  • Device activation rate beyond the first week (e.g. 90%+ active after 30 days, proving sustained value).
  • Pilot customer NPS and willingness to convert to a paid deployment.
  • Measurable impact on your target value driver (e.g. reduced downtime, fewer truck rolls, energy savings, safety incidents prevented, or losses detected).
  • Per-device unit economics trending toward your cost model, confirming profitability at production volumes.

End Customer Functionality

  • Define what your customer hopes to achieve with this pilot in their terms (e.g. "reduce equipment downtime by 25%"), and set success metrics accordingly.

Define Roles via a RACI Matrix

If you haven't used a RACI matrix before, it's a simple project management tool that clarifies who does what. For each task or deliverable, you assign stakeholders one of four roles:

  • R (Responsible) is the person who does the work.
  • A (Accountable) is the person who makes the final decision and is answerable for the outcome.
  • C (Consulted) identifies people whose input is needed before a decision.
  • I (Informed) covers people who need to know the outcome but don't need to provide input.

Defining these roles upfront prevents confusion during your pilot, especially when issues arise and you need quick decisions.

Here's an example RACI matrix for a smart connected product pilot that you can adapt to your team.

Roles

The roles referenced in the matrix below are:

  • EB (Economic Buyer)
  • PO (Product Owner)
  • PD (Product Design for HW/System)
  • FE (Firmware Engineering)
  • CE (Cloud Engineering)
  • FS (Field Install & Support)
  • LC (Legal/Compliance)

Pre-Pilot Planning RACI Matrix Example

| Deliverable / Decision | EB | PO | PD | FE | CE | FS | LC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Business case (why now, why us) | A | R | C | C | C | C | C |
| ROI model + success criteria (what "wins" looks like) | A | R | C | C | C | C | C |
| Pilot scope (sites, users, duration, constraints) | I | A/R | C | C | C | C | C |
| Pilot plan + timeline (milestones, dependencies) | I | A/R | C | C | C | C | C |
| Risk register (top risks + mitigations) | I | A/R | C | C | C | C | C |
| Pilot budget + procurement plan | A | R | C | I | I | C | C |
| Device architecture + BOM (including sensors, carrier, power) | I | A | R | C | I | C | C |
| Firmware architecture + key behaviors (sync cadence, power states, OTA strategy) | I | A | C | R | C | I | I |
| Cloud app architecture (ingest, dashboards, alerts, integrations) | I | A | I | C | R | C | I |
| Data pipeline design (routing, storage, schema, access) | I | A | I | C | R | I | C |
| Deployment runbook (install steps, provisioning, Notehub project setup) | I | A | C | C | C | R | I |
| Site readiness checklist (power, mounting, network, access, safety) | I | A | C | I | I | R | I |
| Support runbook (triage, escalation, replacements, metrics) | I | A | I | C | C | R | I |
| Device security model (identity, keys, attack surface, secure update posture) | I | A | R | R | C | I | C |
| Data security model (PII, encryption, retention, access controls) | I | A | C | C | R | I | C |
| Certification / regulatory plan (RED/FCC/IC, safety, labeling, docs) | I | C | R | I | I | I | A |
| Go/No-Go: pilot launch (readiness decision) | I | A | C | C | C | C | C |
| Go/No-Go: scale commitment (post-pilot investment decision) | A | R | C | C | C | C | C |
| Post-pilot evaluation report (results, lessons learned, scale recommendation) | I | A/R | C | C | C | C | C |
note

Your device security model should leverage Notecard's built-in hardware security (HSM via STSAFE, TLS, private cellular). Be sure to review Blues Security, Reliability, and Governance to ensure your pilot architecture meets enterprise security requirements.

Keep Your Economic Buyer Top-of-Mind

Your economic buyer (the person who controls the budget and makes the final go/no-go decision) cares less about technical elegance and more about whether the numbers work. They're evaluating whether the risks are manageable and the path to scale is clear.

Lead with a concise unit‑economics story: spell out the recurring costs per device (hardware, event transfer, data storage, cloud compute, and expected support) and one‑time costs (BOM costs, certifications, and installations). Translate your pilot's operational improvements into money—tie every core metric to a line in the P&L so the "so what?" is obvious.
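
To make that story concrete, the per-device cost stack can be sketched as below. Every figure here is a placeholder; substitute your own BOM, cellular, cloud, and support numbers:

```python
# Hypothetical per-device unit-economics sketch for the economic buyer.
# All dollar values are assumed placeholders, not real Blues pricing.

one_time = {
    "hardware_bom": 85.00,           # small-batch BOM (artificially high at pilot scale)
    "certification_amortized": 4.00, # cert cost spread across pilot units
    "installation": 40.00,           # installer time per device
}
monthly = {
    "event_transfer": 1.50,          # cellular/event costs
    "data_storage": 0.40,
    "cloud_compute": 0.60,
    "support_allocation": 2.00,      # expected support load per device
}

one_time_total = sum(one_time.values())
monthly_total = sum(monthly.values())
annual_recurring = monthly_total * 12

print(f"One-time cost per device:  ${one_time_total:.2f}")
print(f"Recurring cost per device: ${monthly_total:.2f}/mo (${annual_recurring:.2f}/yr)")
```

Tie each line item to a row in the P&L so the "so what?" is obvious, and flag which items you expect to drop at production volume.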

Address risk straightforwardly and don't bury it. Keep a short risk list that names the risk, the current likelihood/impact, and precisely what you're doing about it. For example, RF dead‑zones and antenna alternatives, battery margins and measured duty cycles, supply and certification timelines, your security posture, BOM changes, and privacy/data residency choices. Show what you will do if things go wrong in terms of rollback plans, RMA handling, and who carries which operational burden (consult the RACI matrix!).

Complete the picture with a scale plan that proves you can go from pilot to production without re‑inventing the wheel. Demonstrate commercial readiness with a pricing model that one or more pilot customers have already accepted.

The message to the economic buyer should be simple: the value is real, the risks are contained, and the path to scale is ready.

Selecting the Right Pilot Customers

Not every customer is a good pilot candidate. Choosing the wrong pilot customers can doom an otherwise solid product—either by generating misleading data or by creating a poor first impression that's hard to recover from.

Good pilot customers share several characteristics. They're engaged and communicative, providing honest feedback and flagging issues quickly. They're representative of your target market, meaning their environment, use cases, and constraints match your broader customer base. They're tolerant of early-stage products and understand pilots involve some rough edges. Most importantly, they're motivated by the outcome and they genuinely need the problem solved, not just doing you a favor. They're also accessible for support so you can reach them quickly when issues arise.

Watch out for red flags that signal a poor fit. "We'll try anything" often means lack of genuine need and lack of engagement. Extreme or unusual requirements represent edge cases that don't represent your market. Difficult logistics like remote locations, complex approval processes, or limited access windows create unnecessary friction. Adversarial relationships where customers see the pilot as leverage rather than partnership rarely end well. No clear decision-maker means you'll struggle to close—if you can't identify who will decide whether to continue, move on.

How many pilot customers?

For most smart connected products, at least 3-5 pilot customers with 10 or more total devices provides enough data to make confident decisions without overwhelming your support capacity. Start with 1-2 customers in Wave 1, then expand.

Pilot Pricing and Commercial Terms

How you price your pilot affects both customer commitment and your ability to transition to paid contracts.

Free pilots have a lower barrier to entry, but customers may not take them seriously. Consider free pilots only for strategic lighthouse customers or when you need reference accounts. Discounted pilots are a common approach—offer 50-75% off production pricing to create customer investment while acknowledging the early-stage nature, and include a clear path to full pricing post-pilot. Paid pilots at production pricing provide the strongest signal of product-market fit, but are harder to close. Consider this only when you have high confidence in the product and strong customer demand.

Your pilot agreements should include duration (typically 8-12 weeks), number of devices and locations, success criteria and how they'll be measured, support expectations (response times, escalation paths), data ownership and privacy terms, transition terms if the pilot succeeds, and exit terms if the pilot fails.

Blues‑Specific Pilot Architecture Notes

Treat Notecard and Notehub as production infrastructure from day one. Begin with provisioning that mirrors your future deployment flow, for example: scan the QR code, claim the device, and assign it to the intended fleet. As soon as a device comes online, exercise your Routes in Notehub and verify that events arrive with the exact data schema your downstream cloud application(s) expect.
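
A downstream schema check for routed events can be as simple as the sketch below. The field names ("device", "when", "body", "temperature") are a hypothetical schema; replace them with the exact schema your Route emits:

```python
# Sketch: verify that events routed from Notehub match the schema your
# cloud application expects. Field names here are assumptions.

REQUIRED_TOP_LEVEL = {"device", "when", "body"}
REQUIRED_BODY = {"temperature"}

def validate_event(event: dict) -> list[str]:
    """Return a list of schema problems (an empty list means the event is valid)."""
    problems = [f"missing field: {f}" for f in REQUIRED_TOP_LEVEL - event.keys()]
    body = event.get("body", {})
    problems += [f"missing body field: {f}" for f in REQUIRED_BODY - body.keys()]
    return problems

sample = {"device": "dev:123456", "when": 1730000000, "body": {"temperature": 21.4}}
print(validate_event(sample))        # well-formed event: no problems
print(validate_event({"body": {}}))  # malformed event: lists what's missing
```

Running a check like this against the first events from each newly claimed device catches schema drift before it pollutes your pilot dashboards.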

Treat device segmentation as a first-class requirement. Organize pilot devices into fleets that reflect how you'll run the pilot in the real world, typically by pilot phase (e.g. "alpha" to "limited pilot" to "expanded pilot") and/or by customer/site.

Take advantage of Notehub Smart Fleets to automatically assign devices to fleets based on parameters you define.

Design your telemetry plan around the questions you need to answer, not around what's easy to emit. Implement a steady/consistent heartbeat across cell and GPS (if used), create alerts on unexpected state change, and model data usage per device per day.

Measure active and idle current requirements in the field so your battery projections are as realistic as possible. If your model and your measurements don't agree, change the model or the firmware now. Plan DFU implementations ahead of time by staging firmware updates to a small cohort of devices. Monitor for regressions or bricked devices, and maintain a path for rolling back to previous firmware versions.
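
As a sketch of that projection, with placeholder values standing in for your measured currents and battery capacity:

```python
# Back-of-the-envelope battery projection from measured field currents.
# All inputs are assumed placeholders -- use your own measurements.

battery_mah = 2400.0          # usable capacity (assumed)
idle_ma = 0.008               # measured idle draw, 8 uA (assumed)
active_ma = 90.0              # measured draw during sync/GPS (assumed)
active_minutes_per_day = 6.0  # total active time per day (assumed)

active_hours = active_minutes_per_day / 60.0
idle_hours = 24.0 - active_hours
mah_per_day = active_ma * active_hours + idle_ma * idle_hours

projected_days = battery_mah / mah_per_day
print(f"~{mah_per_day:.2f} mAh/day -> ~{projected_days / 30:.1f} months")
```

If the projection this model produces disagrees with observed battery drain during the pilot, that disagreement is exactly the signal to change the model or the firmware now.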

note

Be sure to direct your firmware and cloud engineering teams to our guide on Best Practices for Production-Ready Projects and consult our Cloud Architecture Guide for additional information on leveraging Notehub.

Define Your Deployment Scope and Timeline

Each pilot deployment will have a different scope and timeline based on product requirements. Here is one example timeline that you can use as a basis for your own product:

  • Week 0–1 (Readiness): Finalize your deployment runbook, train installers, stage devices, and create baseline dashboards.
  • Week 2–3 (Wave 1, 30% of devices): Deploy to low-risk sites, validate activations, verify data completeness, and fix top issues fast.
  • Week 4–6 (Wave 2, 50% of devices): Add more complex environments that push RF and/or power needs; begin DFU to a small cohort and verify remote fix and rollback motions.
  • Week 7–9 (Wave 3, remaining 20%): Validate edge cases and use learnings to improve installation and support runbooks.
  • Week 10–12 (Close Pilot): Review KPIs, update the economic model with learnings, freeze engineering changes, and make the go/no-go decision on your scale plan (or implement changes and restart the pilot).

Pilot Failure Modes and Mitigation

Understanding common failure modes helps you prevent them—or recognize them early enough to course-correct.

| Failure Mode | Warning Signs | Mitigation |
| --- | --- | --- |
| Technical failures | Devices offline, data gaps, connectivity issues | Extensive pre-pilot testing; instrument everything; have rollback plans |
| Operational failures | Failed installs, slow support response, runbook gaps | Train installers; test support motion before deployment; iterate runbooks |
| Commercial failures | Customer disengagement, unclear ROI, no path to purchase | Define success criteria upfront; communicate value regularly; engage economic buyer |
| Scope creep | New requirements mid-pilot, expanding device count, changing success criteria | Lock scope in pilot charter; document change requests; negotiate timeline extensions |
| Champion loss | Key contact leaves, reorganization, priority shifts | Build relationships across the organization; document decisions; engage executive sponsor |
| Data quality issues | Missing data, incorrect timestamps, schema mismatches | Validate data pipeline before deployment; monitor in real-time; have data QA checkpoints |

When to kill a pilot: Sometimes the right decision is to end a pilot early! Consider this when technical issues are fundamental and can't be resolved without major rework, when customer engagement has collapsed and can't be recovered, when market conditions have changed (competitor, regulation, customer business shift), or when resource constraints make it impossible to support the pilot properly.

Ending a pilot is not failure, it's learning! Document what went wrong, what you'd do differently, and whether the project should be restarted after fixes.

Pre-Determine Your Pilot Exit Criteria

The decisions you need to make at the end of a pilot are easy when the rules are clearly defined ahead of time. Before the first device ships, define what "green" means for technical, operational, and commercial outcomes and who owns the call. On the technical side, commit to specific thresholds (for reliability, completeness, power, and DFU success) and a documented path to green if any metric misses, with a short, dated remediation plan.

Operationally, insist that installs meet target rates and that the support motion proves it can acknowledge and resolve issues within the agreed windows. If remote fixes can't clear the bulk of incidents during the pilot, the deployments at scale may not survive. Commercially, require evidence that the market will buy what you intend to sell; a validated pricing model with a handful of paid pilots is ideal. Close with governance, ensuring that security and privacy checks are complete, exceptions are documented, and an executive sign‑off authorizes the path to scale.

Pilot Charter Template

A pilot charter should fit on a single page and cover eight essential elements.

  1. Pilot Objective (e.g., "Validate remote diagnostics reduces warranty claims by 40% within 3 months").
  2. Scope, including cohort size, sites, and environments.
  3. Key Metrics, meaning the top 6–10 metrics with Red/Yellow/Green thresholds.
  4. Timeline, with steps and dates.
  5. Roles & Governance, based on your RACI model.
  6. Risks & Mitigations, covering your top five risks with owners for each.
  7. Unit Economics, showing one-time costs (hardware) and ongoing costs (cloud and data) per device.
  8. Success/Exit Criteria that define what "success" means for this pilot.

Deployment Site Intake Form

Your intake form should capture critical site information before each deployment.

  1. Document the site contact with contact info for a single responsible party at the site.
  2. Define the access window for installation and follow-ups.
  3. Record environment notes about possible environmental limitations regarding temperature, vibrations, and network signal limitations.
  4. Document power availability and constraints at the site and note any variations from other deployments.
  5. Finally, define safety and compliance requirements around the product and any short-term certification/compliance issues.

In addition, you may want to provide a separate installation checklist for each deployment location, like this:

  • Unbox, verify device components, align antenna
  • Mount per specifications, connect to power
  • Scan QR code, claim device, assign to fleet
  • Wait for first data, verify key readings, measure first results
  • Take photos of deployment, document any issues
  • Provide quick user orientation

Success Metrics Template

Be sure to take your defined success metrics and document them in table format for easy digestion. For example:

| Metric | Target | Red | Yellow | Green | Owner | Source |
| --- | --- | --- | --- | --- | --- | --- |
| Time-to-First-Data (min) | ≤ 5 | > 10 | 6–10 | ≤ 5 | Eng | Notehub |
| Event Sync Reliability (%) | ≥ 99 | < 95 | 95–98 | ≥ 99 | Eng | Notehub |
| Remote-fix Rate (%) | ≥ 50 | < 30 | 30–49 | ≥ 50 | Support | Ticketing System |
| KB/device/day | ≤ 20 | > 30 | 21–30 | ≤ 20 | Cloud Eng | Notehub |
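
A small helper like the following can turn raw metric readings into Red/Yellow/Green statuses for your dashboard. The thresholds shown mirror the example rows in the table above; adjust them for your own metrics:

```python
# Classify a measured metric value against Red/Yellow/Green thresholds.

def rag_status(value: float, green: float, yellow: float, higher_is_better: bool) -> str:
    """Return "Green", "Yellow", or "Red" for a metric reading."""
    if higher_is_better:
        if value >= green:
            return "Green"
        return "Yellow" if value >= yellow else "Red"
    else:
        if value <= green:
            return "Green"
        return "Yellow" if value <= yellow else "Red"

# Time-to-first-data (minutes): Green <= 5, Yellow 6-10, Red > 10
print(rag_status(4, green=5, yellow=10, higher_is_better=False))     # Green
# Event sync reliability (%): Green >= 99, Yellow 95-98, Red < 95
print(rag_status(96.5, green=99, yellow=95, higher_is_better=True))  # Yellow
```

Wiring every metric through one classifier keeps the "go vs. no-go" review objective: the status is computed, not debated.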

Data Plan and Event Usage Calculator

Document your expected data and event usage so you can compare them to actuals:

| Event | Cadence | Bytes/Event | Daily Events | Daily Expected Bytes |
| --- | --- | --- | --- | --- |
| Heartbeat | 12 hr | 300 | 2 | 600 |
| Alert | on error | 600 | < 1 | 0 |
| Readings | 15 min | 1,200 | 96 | 115,200 |
| Total | | | ~100 events/day | ~113 KB/day |
note

When estimating data usage, be sure to add a 20-30% buffer for overhead and possible connectivity retries.
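
The calculator above can be reproduced in a few lines, here with a 25% buffer applied (any value in the suggested 20–30% range works):

```python
# Expected daily data usage per device, mirroring the calculator table,
# with a 25% buffer for overhead and possible connectivity retries.

events = [
    # (name, bytes_per_event, events_per_day)
    ("heartbeat", 300, 2),
    ("alert",     600, 0),   # "on error" -- budget ~0 in the happy path
    ("readings",  1200, 96),
]

daily_bytes = sum(b * n for _, b, n in events)
with_buffer = daily_bytes * 1.25

print(f"Expected: {daily_bytes / 1024:.1f} KB/day")
print(f"Budgeted (+25%): {with_buffer / 1024:.1f} KB/day")
```

Comparing this budgeted figure against actual per-device usage in Notehub is one of the fastest ways to spot a chatty firmware bug during the pilot.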

Phase 4: Execute and Measure

Design is complete, devices are staged, and customers are selected. Phase 4 is where the pilot moves into live operation. The checklists below cover site readiness, installation, early validation, and ongoing operations. Work through them in order, wave by wave, and make sure each wave is stable before you expand to the next.

Deployment Site Readiness (for each location)

  • Site contact confirmed with access window documented
  • Review power requirements, mounting availability, and enclosure fit
  • Note network coverage expectations
  • Backhaul tests planned (e.g. WiFi to cell and/or cell to sat)

Day of Installation

  • Scan QR code to claim device and verify fleet assignment in Notehub
  • Validate Time‑to‑First‑Data within specified target
  • Verify initial data completeness (data schema and values)
  • If needed, attach photos and install notes for support
  • Train customer on cloud application and/or possible alerts (SMS or email)

Early Post-Deployment Validation (First 24–72 Hours)

  • Confirm expected event cadence matches actual device behavior
  • Validate signal strength and connectivity stability
  • Review battery consumption versus projected model
  • Monitor data usage per device per day against estimates
  • Confirm dashboards and downstream integrations reflect live data

Operations During Pilot

  • Triage the top issues daily, assigning an owner to each
  • Track first-try install success rate across deployment waves
  • Measure remote resolution rate versus field visits
  • Monitor support response times and escalation patterns
  • Review customer feedback regularly
  • Stage DFU updates to small cohorts and monitor rollout success
  • Verify rollback capability remains intact

Pilot Device Log

Document each piloted device with its deployed firmware, expected fleet assignment, and other notes. For example:

| Device | Location | SN | Firmware | Fleet | Date | Radio | Notes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| dev:123456 | Syracuse, NY | WIDGET-01 | v0.0.1 | Operational Pilot | 25-OCT-2025 | Cellular | |
| dev:4567889 | Boston, MA | WIDGET-02 | v0.0.2 | Requires Attention | 26-OCT-2025 | WiFi | |
| dev:999999 | Lexington, KY | WIDGET-03 | v0.0.1 | Operational Pilot | 29-OCT-2025 | LoRa | |

Phase 5: Decide and Scale

Making the Go/No-Go Decision

When the pilot closes, the decision owner should lead a formal review comparing actual results against the predefined thresholds. There are three possible outcomes:

  1. Scale to production. Success thresholds are met or exceeded, the operational motion is viable, and customer value is validated. Move to production planning.
  2. Iterate and re-run. Core value is validated, but specific technical or operational gaps require targeted fixes. Define a remediation scope and timeline before restarting.
  3. Stop. Fundamental value or feasibility thresholds were not met. Document lessons learned and redirect resources.

For any missed thresholds, determine whether the gap is minor and correctable, structural but fixable, or fundamental to the product or model. Use pilot data to update your assumptions around battery life projections, data usage models, support effort per device, installation time and friction, and unit economics. Remember that pilot economics will not match production economics—use pilot data to validate directionality, not final margins.

A "stop" decision is not failure—it's disciplined learning that saves you from scaling something that doesn't work.

Planning the Transition to Production

If the decision is to scale, define the next stage clearly and move quickly. Momentum decays fast between pilot close and production launch.

  • Freeze hardware and firmware versions where appropriate
  • Finalize cloud architecture and monitoring patterns
  • Complete certification and compliance requirements
  • Refine support processes for volume
  • Convert pilot customers into production deployments

Post-Pilot Customer Retention

A successful pilot means nothing if customers don't convert to paying production deployments. Plan for retention from day one.

During the pilot, build relationships beyond the technical contact by engaging the economic buyer regularly. Document and communicate wins early and often, capture customer quotes and case study material (with permission), and identify expansion opportunities like more devices, additional locations, or new use cases.

At pilot close, schedule a formal review meeting with all stakeholders to present results against the agreed success criteria. Propose a clear next step, whether that's a production contract, expanded pilot, or wind-down. For successful pilots, create urgency around transition timing.

Watch out for common retention killers like slow transition from pilot to production (momentum dies), pricing surprises when moving to production, loss of the internal champion who drove the pilot, and unresolved issues that were "tolerated" during pilot but won't be at scale.

Pilot Exit Checklist

  • Compare KPI expectations against actual results, and justify any exceptions
  • Update the economic model with real data from paying pilots
  • Capture customer retrospectives, quotes for marketing (with permission), and willingness to continue
  • Make the formal go/no-go decision for the next phase

Resources and Next Steps

A well-planned pilot sets the foundation for a successful product launch, giving you the data and confidence needed to scale. The following resources can help you refine your hardware design, get expert feedback, and connect with the Blues community.

Blues Resources

  • Notecard Walkthrough
  • Notehub Walkthrough
  • Blues Design Review Program

Getting Help

If you have additional questions about planning a pilot for your Blues-based product:

  • Post questions on the Blues Community Forum
  • Contact Blues sales for enterprise-level support

© 2026 Blues Inc.