production ¤
Production Events and Tracking
This module contains both event detection and daily tracking tools for production:
Event Detection Classes:

- MachineStateEvents: Run/idle intervals and transition points from a boolean state signal.
  - detect_run_idle: Intervalize run/idle with optional min duration.
  - transition_events: Point events on idle→run and run→idle changes.
- LineThroughputEvents: Throughput metrics and takt adherence.
  - count_parts: Parts per fixed window from a counter uuid.
  - takt_adherence: Cycle time violations vs. a takt time.
- ChangeoverEvents: Product/recipe changes and end-of-changeover derivation.
  - detect_changeover: Point events at product value changes.
  - changeover_window: End via fixed window or stable-band metrics.
- FlowConstraintEvents: Blocked/starved intervals between upstream/downstream run signals.
  - blocked_events: Upstream running while downstream not consuming.
  - starved_events: Downstream running while upstream not supplying.
Daily Production Tracking Classes:

- PartProductionTracking: Track production quantities by part number.
  - production_by_part: Production quantity per time window.
  - daily_production_summary: Daily totals by part.
  - production_totals: Totals over date ranges.
- CycleTimeTracking: Analyze cycle times by part number.
  - cycle_time_by_part: Calculate cycle times.
  - cycle_time_statistics: Statistical analysis (min/avg/max/std).
  - detect_slow_cycles: Anomaly detection.
  - cycle_time_trend: Trend analysis.
- ShiftReporting: Shift-based performance analysis.
  - shift_production: Production per shift.
  - shift_comparison: Compare shift performance.
  - shift_targets: Target vs. actual analysis.
  - best_and_worst_shifts: Performance ranking.
- DowntimeTracking: Machine availability and downtime analysis.
  - downtime_by_shift: Downtime and availability per shift.
  - downtime_by_reason: Root cause analysis.
  - top_downtime_reasons: Pareto analysis (80/20 rule).
  - availability_trend: Track availability over time.
- QualityTracking: NOK (defective parts) and quality metrics.
  - nok_by_shift: NOK parts and First Pass Yield per shift.
  - quality_by_part: Quality metrics by part number.
  - nok_by_reason: Defect type analysis.
  - daily_quality_summary: Daily quality rollup.
MachineStateEvents ¤
MachineStateEvents(
dataframe: DataFrame,
run_state_uuid: str,
*,
event_uuid: str = "prod:run_idle",
value_column: str = "value_bool",
time_column: str = "systime"
)
Bases: Base
Production: Machine State
Detect run/idle transitions and intervals from a boolean state signal.
- MachineStateEvents: Run/idle state intervals and transitions.
- detect_run_idle: Intervalize run/idle states with optional min duration filter.
- transition_events: Point events on state changes (idle->run, run->idle).
detect_run_idle ¤
detect_run_idle(min_duration: str = '0s') -> pd.DataFrame
Return intervals labeled as 'run' or 'idle'.
- min_duration: discard intervals shorter than this duration.

Columns: start, end, uuid, source_uuid, is_delta, state, duration_seconds
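The intervalization idea can be sketched in plain pandas (a minimal sketch, not the library's implementation; the sample signal below is hypothetical): consecutive rows with the same state are grouped, and each group becomes one interval.

```python
import pandas as pd

# Hypothetical boolean run/idle signal in the module's long format.
df = pd.DataFrame({
    "systime": pd.to_datetime([
        "2024-01-01 08:00", "2024-01-01 08:05",
        "2024-01-01 08:20", "2024-01-01 08:30",
    ]),
    "value_bool": [True, False, True, True],
})

# Give every run of consecutive equal states a group id.
grp = (df["value_bool"] != df["value_bool"].shift()).cumsum()
intervals = df.groupby(grp).agg(
    start=("systime", "first"),
    state=("value_bool", "first"),
)
# Each interval ends where the next one starts (the last end is open/NaT).
intervals["end"] = intervals["start"].shift(-1)
intervals["state"] = intervals["state"].map({True: "run", False: "idle"})
intervals["duration_seconds"] = (
    intervals["end"] - intervals["start"]
).dt.total_seconds()
print(intervals[["start", "end", "state", "duration_seconds"]])
```

A min_duration filter would then simply drop rows with small duration_seconds.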
transition_events ¤
transition_events() -> pd.DataFrame
Return point events at state transitions.
Columns: systime, uuid, source_uuid, is_delta, transition ('idle_to_run'|'run_to_idle'), time_since_last_transition_seconds
detect_rapid_transitions ¤
detect_rapid_transitions(
threshold: str = "5s", min_count: int = 3
) -> pd.DataFrame
Identify suspicious rapid state changes.
- threshold: time window to look for rapid transitions
- min_count: minimum number of transitions within the threshold to be considered rapid

Returns: DataFrame with start_time, end_time, transition_count, duration_seconds
state_quality_metrics ¤
state_quality_metrics() -> Dict[str, Any]
Return quality metrics for the state data.
Returns dictionary with:

- total_transitions: total number of state transitions
- avg_run_duration: average duration of run states in seconds
- avg_idle_duration: average duration of idle states in seconds
- run_idle_ratio: ratio of run time to idle time
- data_gaps_detected: number of data gaps found
- rapid_transitions_detected: number of rapid transition events
LineThroughputEvents ¤
LineThroughputEvents(
dataframe: DataFrame,
*,
event_uuid: str = "prod:throughput",
time_column: str = "systime"
)
Bases: Base
Production: Line Throughput
Methods:

- count_parts: Part counts per fixed window from a monotonically increasing counter.
- takt_adherence: Cycle time violations against a takt time from step/boolean triggers.
count_parts ¤
count_parts(
counter_uuid: str,
*,
value_column: str = "value_integer",
window: str = "1m"
) -> pd.DataFrame
Compute parts per window for a counter uuid.
Returns columns: window_start, uuid, source_uuid, is_delta, count
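One plausible way to realize the counting step, sketched in plain pandas (hypothetical sample counter; the real method also emits the uuid/source_uuid/is_delta columns): take the last counter reading per window and difference consecutive windows.

```python
import pandas as pd

# Hypothetical monotonically increasing part counter sampled every 20 s.
idx = pd.date_range("2024-01-01 08:00", periods=7, freq="20s")
counter = pd.Series([100, 102, 105, 109, 112, 118, 121], index=idx)

# Parts per 1-minute window: last counter value per window, then diff.
# The first window has no baseline, so its delta is set to 0 here.
per_window = counter.resample("1min").last().diff().fillna(0).astype(int)
print(per_window)
```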
takt_adherence ¤
takt_adherence(
cycle_uuid: str,
*,
value_column: str = "value_bool",
takt_time: str = "60s",
min_violation: str = "0s"
) -> pd.DataFrame
Flag cycles whose durations exceed the takt_time.
For boolean triggers: detect True rising edges as cycle boundaries. For integer steps: detect increments as cycle boundaries.
Returns: systime (at boundary), uuid, source_uuid, is_delta, cycle_time_seconds, violation
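A minimal pandas sketch of the boolean-trigger case (sample data hypothetical; not the library implementation): rising edges become cycle boundaries, and the gap between consecutive boundaries is compared against the takt time.

```python
import pandas as pd

takt_seconds = 60.0

# Hypothetical boolean cycle trigger; True rising edges mark cycle boundaries.
df = pd.DataFrame({
    "systime": pd.to_datetime([
        "2024-01-01 08:00:00", "2024-01-01 08:00:10",
        "2024-01-01 08:00:55", "2024-01-01 08:01:05",
        "2024-01-01 08:02:30", "2024-01-01 08:02:40",
    ]),
    "value_bool": [True, False, True, False, True, False],
})

# Keep rising edges (False -> True), counting the first True sample as an edge.
rising = df[df["value_bool"] & ~df["value_bool"].shift(fill_value=False)]
cycle_time = rising["systime"].diff().dt.total_seconds()
violations = cycle_time > takt_seconds
print(pd.DataFrame({"cycle_time_seconds": cycle_time, "violation": violations}))
```

The first boundary has no preceding cycle, so its cycle time is NaN and it is never flagged.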
throughput_oee ¤
throughput_oee(
counter_uuid: str,
*,
value_column: str = "value_integer",
window: str = "1h",
target_rate: Optional[float] = None,
availability_threshold: float = 0.95
) -> pd.DataFrame
Calculate Overall Equipment Effectiveness (OEE) metrics.
OEE = Availability × Performance × Quality
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| counter_uuid | str | UUID for the part counter signal | required |
| value_column | str | Column containing counter values | 'value_integer' |
| window | str | Time window for aggregation | '1h' |
| target_rate | Optional[float] | Target production rate (parts per window). If None, uses max observed | None |
| availability_threshold | float | Threshold for considering equipment available | 0.95 |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with columns: window_start, uuid, source_uuid, is_delta, actual_count, target_count, availability, performance, oee_score |
throughput_trends ¤
throughput_trends(
counter_uuid: str,
*,
value_column: str = "value_integer",
window: str = "1h",
trend_window: int = 24
) -> pd.DataFrame
Analyze throughput trends with moving averages and degradation detection.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| counter_uuid | str | UUID for the part counter signal | required |
| value_column | str | Column containing counter values | 'value_integer' |
| window | str | Time window for counting parts | '1h' |
| trend_window | int | Number of windows for trend calculation | 24 |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with throughput, moving average, trend direction, and degradation flag |
cycle_quality_check ¤
cycle_quality_check(
cycle_uuid: str,
*,
value_column: str = "value_bool",
expected_cycle_time: Optional[float] = None,
tolerance_pct: float = 0.1
) -> pd.DataFrame
Enhanced cycle detection with quality validation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| cycle_uuid | str | UUID for the cycle trigger signal | required |
| value_column | str | Column containing cycle trigger (bool/integer) | 'value_bool' |
| expected_cycle_time | Optional[float] | Expected cycle time in seconds. If None, uses median | None |
| tolerance_pct | float | Tolerance percentage for cycle time validation | 0.1 |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with cycle times, validation status, and quality flags |
ChangeoverEvents ¤
ChangeoverEvents(
dataframe: DataFrame,
*,
event_uuid: str = "prod:changeover",
time_column: str = "systime"
)
Bases: Base
Production: Changeover
Detect product/recipe changes and compute changeover windows without requiring a dedicated 'first good' signal.
Methods:

- detect_changeover: point events when product/recipe changes.
- changeover_window: derive an end time via fixed window or 'stable_band' metrics.
detect_changeover ¤
detect_changeover(
product_uuid: str,
*,
value_column: str = "value_string",
min_hold: str = "0s"
) -> pd.DataFrame
Emit point events when the product/recipe changes value.
Uses a hold check: the new product must persist for at least min_hold until the next change.
changeover_window ¤
changeover_window(
product_uuid: str,
*,
value_column: str = "value_string",
start_time: Optional[Timestamp] = None,
until: str = "fixed_window",
config: Optional[Dict[str, Any]] = None,
fallback: Optional[Dict[str, Any]] = None
) -> pd.DataFrame
Compute changeover windows per product change with enhanced configurability.
until:

- fixed_window: end = start + config['duration'] (e.g., '10m')
- stable_band: end when all metrics stabilize within band for hold:

  config = {
      'metrics': [
          {'uuid': 'm1', 'value_column': 'value_double', 'band': 0.2, 'hold': '2m'},
          ...
      ],
      'reference_method': 'expanding_median' | 'rolling_mean' | 'ewma' | 'target_value',
      'rolling_window': 5,   # for rolling_mean (number of points)
      'ewma_span': 10,       # for ewma
      'target_values': {'m1': 100.0, ...},  # for target_value
  }

fallback: {'default_duration': '10m', 'completed': False}
changeover_quality_metrics ¤
changeover_quality_metrics(
product_uuid: str, *, value_column: str = "value_string"
) -> pd.DataFrame
Compute quality metrics for changeovers.
Returns metrics including:

- changeover duration patterns
- frequency statistics
- time between changeovers
- product-specific metrics
FlowConstraintEvents ¤
FlowConstraintEvents(
dataframe: DataFrame,
*,
time_column: str = "systime",
event_uuid: str = "prod:flow"
)
Bases: Base
Production: Flow Constraints
- blocked_events: upstream running while downstream not consuming.
- starved_events: downstream idle due to lack of upstream supply.
blocked_events ¤
blocked_events(
*,
roles: Dict[str, str],
tolerance: str = "200ms",
tolerance_before: Optional[str] = None,
tolerance_after: Optional[str] = None,
min_duration: str = "0s"
) -> pd.DataFrame
Blocked: upstream_run=True while downstream_run=False.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| roles | Dict[str, str] | Dictionary mapping role names to UUIDs. Expected keys: 'upstream_run', 'downstream_run' | required |
| tolerance | str | Default tolerance for time alignment (used if directional tolerances not provided) | '200ms' |
| tolerance_before | Optional[str] | Tolerance for looking backward in time during alignment | None |
| tolerance_after | Optional[str] | Tolerance for looking forward in time during alignment | None |
| min_duration | str | Minimum duration for an event to be included | '0s' |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with columns: start, end, uuid, source_uuid, is_delta, type, time_alignment_quality, duration, severity |
Example
roles = {'upstream_run': 'uuid1', 'downstream_run': 'uuid2'}
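The core blocked condition can be sketched in plain pandas (a minimal sketch with hypothetical, pre-aligned sample signals; the real method additionally handles time alignment with the tolerance parameters):

```python
import pandas as pd

# Hypothetical run signals for two stations, already on a common index.
t = pd.date_range("2024-01-01 08:00", periods=6, freq="10s")
upstream = pd.Series([True, True, True, True, False, True], index=t)
downstream = pd.Series([True, False, False, True, True, True], index=t)

# Blocked: upstream running while downstream is not consuming.
blocked = upstream & ~downstream

# Intervalize consecutive blocked samples into events.
grp = (blocked != blocked.shift()).cumsum()
events = (
    pd.DataFrame({"blocked": blocked, "time": t})
    .groupby(grp)
    .agg(start=("time", "first"), end=("time", "last"),
         blocked=("blocked", "first"))
)
events = events[events["blocked"]]
# Note: 'end' here is the last blocked sample, not the true transition time.
print(events[["start", "end"]])
```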
starved_events ¤
starved_events(
*,
roles: Dict[str, str],
tolerance: str = "200ms",
tolerance_before: Optional[str] = None,
tolerance_after: Optional[str] = None,
min_duration: str = "0s"
) -> pd.DataFrame
Starved: downstream_run=True while upstream_run=False.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| roles | Dict[str, str] | Dictionary mapping role names to UUIDs. Expected keys: 'upstream_run', 'downstream_run' | required |
| tolerance | str | Default tolerance for time alignment (used if directional tolerances not provided) | '200ms' |
| tolerance_before | Optional[str] | Tolerance for looking backward in time during alignment | None |
| tolerance_after | Optional[str] | Tolerance for looking forward in time during alignment | None |
| min_duration | str | Minimum duration for an event to be included | '0s' |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with columns: start, end, uuid, source_uuid, is_delta, type, time_alignment_quality, duration, severity |
Example
roles = {'upstream_run': 'uuid1', 'downstream_run': 'uuid2'}
flow_constraint_analytics ¤
flow_constraint_analytics(
*,
roles: Dict[str, str],
tolerance: str = "200ms",
tolerance_before: Optional[str] = None,
tolerance_after: Optional[str] = None,
min_duration: str = "0s",
minor_threshold: str = "5s",
moderate_threshold: str = "30s"
) -> Dict[str, Any]
Generate comprehensive analytics for flow constraints (blockages and starvations).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| roles | Dict[str, str] | Dictionary mapping role names to UUIDs. Expected keys: 'upstream_run', 'downstream_run' | required |
| tolerance | str | Default tolerance for time alignment (used if directional tolerances not provided) | '200ms' |
| tolerance_before | Optional[str] | Tolerance for looking backward in time during alignment | None |
| tolerance_after | Optional[str] | Tolerance for looking forward in time during alignment | None |
| min_duration | str | Minimum duration for an event to be included | '0s' |
| minor_threshold | str | Duration threshold for minor severity classification | '5s' |
| moderate_threshold | str | Duration threshold for moderate severity classification | '30s' |

Returns:

| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary containing analytics for both blocked and starved events |
Example
roles = {'upstream_run': 'uuid1', 'downstream_run': 'uuid2'}
analytics = flow.flow_constraint_analytics(roles=roles)
print(analytics['summary']['blocked_count'])
PartProductionTracking ¤
PartProductionTracking(
dataframe: DataFrame, *, time_column: str = "systime"
)
Bases: Base
Track production quantities by part number.
Each UUID represents one signal:

- part_id_uuid: string signal with current part number
- counter_uuid: monotonic counter for production count
Example usage
tracker = PartProductionTracking(df)

# Hourly production by part
hourly = tracker.production_by_part(
    part_id_uuid='part_number_signal',
    counter_uuid='counter_signal',
    window='1h',
)

# Daily summary
daily = tracker.daily_production_summary(
    part_id_uuid='part_number_signal',
    counter_uuid='counter_signal',
)
Initialize part production tracker.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataframe | DataFrame | Input DataFrame with timeseries data | required |
| time_column | str | Name of timestamp column | 'systime' |
production_by_part ¤
production_by_part(
part_id_uuid: str,
counter_uuid: str,
*,
window: str = "1h",
value_column_part: str = "value_string",
value_column_counter: str = "value_integer"
) -> pd.DataFrame
Calculate production quantity per part number.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| part_id_uuid | str | UUID for part number signal | required |
| counter_uuid | str | UUID for production counter | required |
| window | str | Time window for aggregation (e.g., '1h', '8h', '1d') | '1h' |
| value_column_part | str | Column containing part numbers | 'value_string' |
| value_column_counter | str | Column containing counter values | 'value_integer' |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with columns: window_start, part_number, quantity, first_count, last_count |
Example
production_by_part('part_id', 'counter', window='1h')

          window_start part_number  quantity  first_count  last_count
0  2024-01-01 08:00:00      PART_A       150         1000        1150
1  2024-01-01 09:00:00      PART_A       145         1150        1295
2  2024-01-01 10:00:00      PART_B        98         1295        1393
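The quantity column is the counter delta within each window. A minimal pandas sketch of that delta step (the windowed readings below are hypothetical and mirror the example output):

```python
import pandas as pd

# Hypothetical last counter reading per window, with the active part number.
df = pd.DataFrame({
    "window_start": pd.to_datetime(
        ["2024-01-01 08:00", "2024-01-01 09:00", "2024-01-01 10:00"]),
    "part_number": ["PART_A", "PART_A", "PART_B"],
    "last_count": [1150, 1295, 1393],
})

# Each window starts where the previous one ended (1000 is the opening count).
df["first_count"] = df["last_count"].shift(fill_value=1000)
df["quantity"] = df["last_count"] - df["first_count"]
print(df)
```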
daily_production_summary ¤
daily_production_summary(
part_id_uuid: str,
counter_uuid: str,
*,
value_column_part: str = "value_string",
value_column_counter: str = "value_integer"
) -> pd.DataFrame
Daily production summary by part number.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| part_id_uuid | str | UUID for part number signal | required |
| counter_uuid | str | UUID for production counter | required |
| value_column_part | str | Column containing part numbers | 'value_string' |
| value_column_counter | str | Column containing counter values | 'value_integer' |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with columns: date, part_number, total_quantity, hours_active |
Example
daily_production_summary('part_id', 'counter')

         date part_number  total_quantity  hours_active
0  2024-01-01      PART_A            1200             8
1  2024-01-01      PART_B             850             6
2  2024-01-02      PART_A            1150             8
production_totals ¤
production_totals(
part_id_uuid: str,
counter_uuid: str,
*,
start_date: Optional[str] = None,
end_date: Optional[str] = None,
value_column_part: str = "value_string",
value_column_counter: str = "value_integer"
) -> pd.DataFrame
Total production by part number for a date range.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| part_id_uuid | str | UUID for part number signal | required |
| counter_uuid | str | UUID for production counter | required |
| start_date | Optional[str] | Start date 'YYYY-MM-DD' (optional) | None |
| end_date | Optional[str] | End date 'YYYY-MM-DD' (optional) | None |
| value_column_part | str | Column containing part numbers | 'value_string' |
| value_column_counter | str | Column containing counter values | 'value_integer' |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with total production per part |
Example
production_totals('part_id', 'counter',
                  start_date='2024-01-01', end_date='2024-01-07')

  part_number  total_quantity  days_produced
0      PART_A            8450              5
1      PART_B            6200              4
CycleTimeTracking ¤
CycleTimeTracking(
dataframe: DataFrame, *, time_column: str = "systime"
)
Bases: Base
Track cycle times by part number.
Each UUID represents one signal:

- part_id_uuid: string signal with current part number
- cycle_trigger_uuid: boolean/integer signal for cycle completion
Example usage
tracker = CycleTimeTracking(df)

# Get cycle times by part
cycles = tracker.cycle_time_by_part(
    part_id_uuid='part_number_signal',
    cycle_trigger_uuid='cycle_complete_signal',
)

# Get statistics
stats = tracker.cycle_time_statistics(
    part_id_uuid='part_number_signal',
    cycle_trigger_uuid='cycle_complete_signal',
)
Initialize cycle time tracker.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataframe | DataFrame | Input DataFrame with timeseries data | required |
| time_column | str | Name of timestamp column | 'systime' |
cycle_time_by_part ¤
cycle_time_by_part(
part_id_uuid: str,
cycle_trigger_uuid: str,
*,
value_column_part: str = "value_string",
value_column_trigger: str = "value_bool"
) -> pd.DataFrame
Calculate cycle time for each part number.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| part_id_uuid | str | UUID for part number signal | required |
| cycle_trigger_uuid | str | UUID for cycle completion trigger | required |
| value_column_part | str | Column containing part numbers | 'value_string' |
| value_column_trigger | str | Column containing cycle triggers | 'value_bool' |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with columns: systime, part_number, cycle_time_seconds |
Example
cycle_time_by_part('part_id', 'cycle_trigger')

               systime part_number  cycle_time_seconds
0  2024-01-01 08:05:30      PART_A                45.2
1  2024-01-01 08:06:18      PART_A                48.0
2  2024-01-01 08:07:05      PART_A                47.1
cycle_time_statistics ¤
cycle_time_statistics(
part_id_uuid: str,
cycle_trigger_uuid: str,
*,
value_column_part: str = "value_string",
value_column_trigger: str = "value_bool"
) -> pd.DataFrame
Calculate statistics: min, avg, max, std cycle time by part.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| part_id_uuid | str | UUID for part number signal | required |
| cycle_trigger_uuid | str | UUID for cycle completion trigger | required |
| value_column_part | str | Column containing part numbers | 'value_string' |
| value_column_trigger | str | Column containing cycle triggers | 'value_bool' |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with statistics per part: part_number, count, min_seconds, avg_seconds, max_seconds, std_seconds, median_seconds |
Example
cycle_time_statistics('part_id', 'cycle_trigger')

  part_number  count  min_seconds  avg_seconds  max_seconds  std_seconds  median_seconds
0      PART_A    450         42.1         47.5         58.2          3.2            47.1
1      PART_B    320         55.0         62.8         78.5          5.1            61.9
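These statistics amount to a grouped aggregation over per-cycle records. A minimal pandas sketch (the per-cycle records below are hypothetical; the library derives them from the trigger signal first):

```python
import pandas as pd

# Hypothetical per-cycle records: one row per completed cycle.
cycles = pd.DataFrame({
    "part_number": ["PART_A", "PART_A", "PART_A", "PART_B", "PART_B"],
    "cycle_time_seconds": [45.2, 48.0, 47.1, 62.0, 63.5],
})

# Named aggregation produces one row of statistics per part number.
stats = cycles.groupby("part_number")["cycle_time_seconds"].agg(
    count="count",
    min_seconds="min",
    avg_seconds="mean",
    max_seconds="max",
    std_seconds="std",
    median_seconds="median",
).reset_index()
print(stats)
```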
detect_slow_cycles ¤
detect_slow_cycles(
part_id_uuid: str,
cycle_trigger_uuid: str,
*,
threshold_factor: float = 1.5,
value_column_part: str = "value_string",
value_column_trigger: str = "value_bool"
) -> pd.DataFrame
Identify cycles that exceed normal time by threshold factor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| part_id_uuid | str | UUID for part number signal | required |
| cycle_trigger_uuid | str | UUID for cycle completion trigger | required |
| threshold_factor | float | Cycles slower than median * factor are flagged | 1.5 |
| value_column_part | str | Column containing part numbers | 'value_string' |
| value_column_trigger | str | Column containing cycle triggers | 'value_bool' |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with slow cycles: systime, part_number, cycle_time_seconds, median_seconds, deviation_factor, is_slow |
Example
detect_slow_cycles('part_id', 'cycle_trigger', threshold_factor=1.5)

               systime part_number  cycle_time_seconds  median_seconds  deviation_factor  is_slow
0  2024-01-01 10:15:30      PART_A                75.2            47.1              1.60     True
1  2024-01-01 14:22:18      PART_A                82.5            47.1              1.75     True
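The flagging rule is median-based: deviation_factor is the cycle time divided by the per-part median, and a cycle is slow when that factor exceeds threshold_factor. A minimal pandas sketch with hypothetical cycles:

```python
import pandas as pd

# Hypothetical per-cycle records; the last cycle is clearly slower.
cycles = pd.DataFrame({
    "part_number": ["PART_A"] * 4,
    "cycle_time_seconds": [45.2, 48.0, 47.1, 75.2],
})

threshold_factor = 1.5
# Per-part median, broadcast back onto every row of that part.
median = cycles.groupby("part_number")["cycle_time_seconds"].transform("median")
cycles["deviation_factor"] = cycles["cycle_time_seconds"] / median
cycles["is_slow"] = cycles["deviation_factor"] > threshold_factor
print(cycles[cycles["is_slow"]])
```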
cycle_time_trend ¤
cycle_time_trend(
part_id_uuid: str,
cycle_trigger_uuid: str,
part_number: str,
*,
window_size: int = 20,
value_column_part: str = "value_string",
value_column_trigger: str = "value_bool"
) -> pd.DataFrame
Analyze cycle time trends for a specific part.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| part_id_uuid | str | UUID for part number signal | required |
| cycle_trigger_uuid | str | UUID for cycle completion trigger | required |
| part_number | str | Specific part number to analyze | required |
| window_size | int | Number of cycles for moving average | 20 |
| value_column_part | str | Column containing part numbers | 'value_string' |
| value_column_trigger | str | Column containing cycle triggers | 'value_bool' |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with trend data: systime, cycle_time_seconds, moving_avg, trend |
Example
cycle_time_trend('part_id', 'cycle_trigger', 'PART_A')

               systime  cycle_time_seconds  moving_avg      trend
0  2024-01-01 08:05:30                45.2        47.1  improving
1  2024-01-01 08:06:18                48.0        47.2     stable
2  2024-01-01 08:07:05                47.1        47.1     stable
hourly_cycle_time_summary ¤
hourly_cycle_time_summary(
part_id_uuid: str,
cycle_trigger_uuid: str,
*,
value_column_part: str = "value_string",
value_column_trigger: str = "value_bool"
) -> pd.DataFrame
Hourly summary of cycle times by part.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| part_id_uuid | str | UUID for part number signal | required |
| cycle_trigger_uuid | str | UUID for cycle completion trigger | required |
| value_column_part | str | Column containing part numbers | 'value_string' |
| value_column_trigger | str | Column containing cycle triggers | 'value_bool' |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with hourly statistics: hour, part_number, cycles_completed, avg_cycle_time, min_cycle_time, max_cycle_time |
Example
hourly_cycle_time_summary('part_id', 'cycle_trigger')

                  hour part_number  cycles_completed  avg_cycle_time  min_cycle_time  max_cycle_time
0  2024-01-01 08:00:00      PART_A                75            47.2            42.1            55.8
1  2024-01-01 09:00:00      PART_A                78            46.8            43.0            52.3
ShiftReporting ¤
ShiftReporting(
dataframe: DataFrame,
*,
time_column: str = "systime",
shift_definitions: Optional[
Dict[str, Tuple[str, str]]
] = None
)
Bases: Base
Simple shift-based production reporting.
Each UUID represents one signal:

- counter_uuid: production counter
- part_id_uuid: part number (optional)
Example usage
reporter = ShiftReporting(df, shift_definitions={
    "day": ("06:00", "14:00"),
    "afternoon": ("14:00", "22:00"),
    "night": ("22:00", "06:00"),
})

# Production per shift
shift_prod = reporter.shift_production(
    counter_uuid='counter_signal',
    part_id_uuid='part_number_signal',
)

# Compare shifts
comparison = reporter.shift_comparison(counter_uuid='counter_signal')
Initialize shift reporter.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataframe | DataFrame | Input DataFrame with timeseries data | required |
| time_column | str | Name of timestamp column | 'systime' |
| shift_definitions | Optional[Dict[str, Tuple[str, str]]] | Dictionary mapping shift names to (start, end) times. Default: 3-shift operation (06:00-14:00, 14:00-22:00, 22:00-06:00) | None |
Example shift_definitions

{
    "shift_1": ("06:00", "14:00"),
    "shift_2": ("14:00", "22:00"),
    "shift_3": ("22:00", "06:00"),
}
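Mapping a timestamp to a shift must handle the overnight shift that wraps past midnight. A minimal sketch of that lookup (the helper shift_of is hypothetical, not part of the API):

```python
from datetime import time

import pandas as pd

# Hypothetical three-shift pattern matching the default definitions.
shifts = {
    "shift_1": (time(6, 0), time(14, 0)),
    "shift_2": (time(14, 0), time(22, 0)),
    "shift_3": (time(22, 0), time(6, 0)),
}

def shift_of(ts: pd.Timestamp) -> str:
    """Return the shift containing ts; the last shift wraps past midnight."""
    for name, (start, end) in shifts.items():
        if start < end:
            if start <= ts.time() < end:
                return name
        # Wrapping shift: matches late evening OR early morning.
        elif ts.time() >= start or ts.time() < end:
            return name
    return "unknown"

print(shift_of(pd.Timestamp("2024-01-01 15:30")))
print(shift_of(pd.Timestamp("2024-01-01 23:10")))
```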
shift_production ¤
shift_production(
counter_uuid: str,
part_id_uuid: Optional[str] = None,
*,
value_column_counter: str = "value_integer",
value_column_part: str = "value_string",
date: Optional[str] = None
) -> pd.DataFrame
Production quantity per shift.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| counter_uuid | str | Production counter UUID | required |
| part_id_uuid | Optional[str] | Part number UUID (optional, for part-specific production) | None |
| value_column_counter | str | Column containing counter values | 'value_integer' |
| value_column_part | str | Column containing part numbers | 'value_string' |
| date | Optional[str] | Specific date in 'YYYY-MM-DD' format (optional) | None |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with production by shift: date, shift, part_number, quantity |
Example
shift_production('counter', part_id_uuid='part_id')

         date    shift part_number  quantity
0  2024-01-01  shift_1      PART_A       450
1  2024-01-01  shift_2      PART_A       425
2  2024-01-01  shift_3      PART_A       380
shift_comparison ¤
shift_comparison(
counter_uuid: str,
*,
value_column_counter: str = "value_integer",
days: int = 7
) -> pd.DataFrame
Compare shift performance over recent days.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| counter_uuid | str | Production counter UUID | required |
| value_column_counter | str | Column containing counter values | 'value_integer' |
| days | int | Number of recent days to analyze | 7 |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with shift comparison: shift, avg_quantity, min_quantity, max_quantity, std_quantity, days_count |
Example
shift_comparison('counter', days=7)

     shift  avg_quantity  min_quantity  max_quantity  std_quantity  days_count
0  shift_1           445           420           465          15.2           7
1  shift_2           430           405           450          12.8           7
2  shift_3           385           360           410          18.5           7
shift_targets ¤
shift_targets(
counter_uuid: str,
targets: Dict[str, float],
*,
value_column_counter: str = "value_integer",
date: Optional[str] = None
) -> pd.DataFrame
Compare actual production to shift targets.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| counter_uuid | str | Production counter UUID | required |
| targets | Dict[str, float] | Dictionary mapping shift names to target quantities | required |
| value_column_counter | str | Column containing counter values | 'value_integer' |
| date | Optional[str] | Specific date in 'YYYY-MM-DD' format (optional) | None |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with target comparison: date, shift, actual, target, variance, achievement_pct |
Example
shift_targets('counter', targets={'shift_1': 450, 'shift_2': 450, 'shift_3': 400})

         date    shift  actual  target  variance  achievement_pct
0  2024-01-01  shift_1     445     450        -5             98.9
1  2024-01-01  shift_2     465     450        15            103.3
2  2024-01-01  shift_3     390     400       -10             97.5
best_and_worst_shifts ¤
best_and_worst_shifts(
counter_uuid: str,
*,
value_column_counter: str = "value_integer",
days: int = 30
) -> Dict[str, pd.DataFrame]
Identify best and worst performing shifts.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| counter_uuid | str | Production counter UUID | required |
| value_column_counter | str | Column containing counter values | 'value_integer' |
| days | int | Number of recent days to analyze | 30 |

Returns:

| Type | Description |
|---|---|
| Dict[str, DataFrame] | Dictionary with 'best' and 'worst' shift DataFrames |
Example
results = best_and_worst_shifts('counter')
results['best']

         date    shift  quantity
0  2024-01-15  shift_2       495
1  2024-01-18  shift_1       490
2  2024-01-22  shift_2       485
DowntimeTracking ¤
DowntimeTracking(
dataframe: DataFrame,
*,
time_column: str = "systime",
shift_definitions: Optional[
Dict[str, tuple[str, str]]
] = None
)
Bases: Base
Track machine downtimes by shift and reason.
Each UUID represents one signal:

- state_uuid: machine state (running/stopped/idle)
- reason_uuid: downtime reason code (optional)
Example usage
tracker = DowntimeTracking(df)

# Downtime per shift
shift_downtime = tracker.downtime_by_shift(
    state_uuid='machine_state',
    running_value='Running',
)

# Downtime by reason
reason_analysis = tracker.downtime_by_reason(
    state_uuid='machine_state',
    reason_uuid='downtime_reason',
    stopped_value='Stopped',
)
Initialize downtime tracker.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataframe | DataFrame | Input DataFrame with timeseries data | required |
| time_column | str | Name of timestamp column | 'systime' |
| shift_definitions | Optional[Dict[str, tuple[str, str]]] | Dictionary mapping shift names to (start, end) times. Default: 3-shift operation (06:00-14:00, 14:00-22:00, 22:00-06:00) | None |
downtime_by_shift ¤
downtime_by_shift(
state_uuid: str,
*,
running_value: str = "Running",
value_column: str = "value_string"
) -> pd.DataFrame
Calculate downtime duration per shift.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| state_uuid | str | UUID for machine state signal | required |
| running_value | str | Value that indicates machine is running | 'Running' |
| value_column | str | Column containing state values | 'value_string' |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with downtime by shift: date, shift, total_minutes, downtime_minutes, uptime_minutes, availability_pct |
Example
downtime_by_shift('machine_state', running_value='Running')

         date    shift  total_minutes  downtime_minutes  uptime_minutes  availability_pct
0  2024-01-01  shift_1            480              45.2           434.8              90.6
1  2024-01-01  shift_2            480              67.5           412.5              85.9
2  2024-01-01  shift_3            480              92.0           388.0              80.8
downtime_by_reason ¤
downtime_by_reason(
state_uuid: str,
reason_uuid: str,
*,
stopped_value: str = "Stopped",
value_column_state: str = "value_string",
value_column_reason: str = "value_string"
) -> pd.DataFrame
Analyze downtime by reason code.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| state_uuid | str | UUID for machine state signal | required |
| reason_uuid | str | UUID for downtime reason signal | required |
| stopped_value | str | Value indicating machine is stopped | 'Stopped' |
| value_column_state | str | Column containing state values | 'value_string' |
| value_column_reason | str | Column containing reason codes | 'value_string' |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with downtime by reason: reason, occurrences, total_minutes, avg_minutes, pct_of_total |
Example
downtime_by_reason('state', 'reason', stopped_value='Stopped')

              reason  occurrences  total_minutes  avg_minutes  pct_of_total
0  Material_Shortage           12          145.5         12.1          35.2
1        Tool_Change            8           98.2         12.3          23.8
2      Quality_Issue            5           76.0         15.2          18.4
top_downtime_reasons ¤
top_downtime_reasons(
state_uuid: str,
reason_uuid: str,
*,
top_n: int = 5,
stopped_value: str = "Stopped",
value_column_state: str = "value_string",
value_column_reason: str = "value_string"
) -> pd.DataFrame
Get top N downtime reasons (Pareto analysis).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| state_uuid | str | UUID for machine state signal | required |
| reason_uuid | str | UUID for downtime reason signal | required |
| top_n | int | Number of top reasons to return | 5 |
| stopped_value | str | Value indicating machine is stopped | 'Stopped' |
| value_column_state | str | Column containing state values | 'value_string' |
| value_column_reason | str | Column containing reason codes | 'value_string' |
Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with top N reasons and cumulative percentage |
Example

```
>>> top_downtime_reasons('state', 'reason', top_n=5)
              reason  total_minutes  pct_of_total  cumulative_pct
0  Material_Shortage          145.5          35.2            35.2
1        Tool_Change           98.2          23.8            59.0
2      Quality_Issue           76.0          18.4            77.4
```
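The cumulative_pct column is what makes the Pareto cut-off visible: sort reasons by downtime, keep the top N, and accumulate each one's share of the grand total. A minimal sketch in plain pandas (the data and names below are illustrative, not the module's internals):

```python
import pandas as pd

# Hypothetical per-reason downtime totals (minutes)
reasons = pd.DataFrame({
    "reason": ["Material_Shortage", "Tool_Change", "Quality_Issue", "Setup"],
    "total_minutes": [145.5, 98.2, 76.0, 93.6],
})

# Sort descending, keep the top 3, accumulate percentage of the full total
top = reasons.sort_values("total_minutes", ascending=False).head(3).copy()
top["pct_of_total"] = 100 * top["total_minutes"] / reasons["total_minutes"].sum()
top["cumulative_pct"] = top["pct_of_total"].cumsum()
```

A cumulative_pct crossing roughly 80% marks where the 80/20 rule suggests the remaining reasons matter much less.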
availability_trend

```
availability_trend(
    state_uuid: str,
    *,
    running_value: str = "Running",
    value_column: str = "value_string",
    window: str = "1D"
) -> pd.DataFrame
```
Calculate availability trend over time.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| state_uuid | str | UUID for machine state signal | required |
| running_value | str | Value that indicates machine is running | 'Running' |
| value_column | str | Column containing state values | 'value_string' |
| window | str | Time window for aggregation (e.g., '1D', '1W') | '1D' |
Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with availability trend: period, availability_pct, uptime_minutes, downtime_minutes |
Example

```
>>> availability_trend('state', window='1D')
       period  availability_pct  uptime_minutes  downtime_minutes
0  2024-01-01              87.5          1260.0             180.0
1  2024-01-02              91.2          1313.3             126.7
2  2024-01-03              85.8          1235.5             204.5
```
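Conceptually, each window's availability is a time-weighted reduction over the state signal: hold each sample's state until the next sample, split the elapsed minutes into running vs. not running per window, and take the ratio. A rough sketch in plain pandas (column names follow the parameter docs above; this is not the module's actual implementation):

```python
import pandas as pd

# Hypothetical state samples; each state is assumed to hold until the next timestamp
df = pd.DataFrame({
    "systime": pd.to_datetime([
        "2024-01-01 00:00", "2024-01-01 06:00",
        "2024-01-01 09:00", "2024-01-02 00:00",
    ]),
    "value_string": ["Running", "Stopped", "Running", "Running"],
})

# Minutes each state was held, looking forward to the next sample
df["minutes"] = df["systime"].diff().shift(-1).dt.total_seconds() / 60
df = df.dropna(subset=["minutes"])
df["running"] = df["value_string"].eq("Running")

# Aggregate uptime/downtime per 1D window and derive availability
g = df.set_index("systime").groupby(pd.Grouper(freq="1D"))
trend = pd.DataFrame({
    "uptime_minutes": g.apply(lambda w: w.loc[w["running"], "minutes"].sum()),
    "downtime_minutes": g.apply(lambda w: w.loc[~w["running"], "minutes"].sum()),
})
trend["availability_pct"] = 100 * trend["uptime_minutes"] / (
    trend["uptime_minutes"] + trend["downtime_minutes"]
)
```

Passing '1W' instead of '1D' to the Grouper frequency would give the weekly variant shown in the window parameter.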
QualityTracking

```
QualityTracking(
    dataframe: DataFrame,
    *,
    time_column: str = "systime",
    shift_definitions: Optional[
        Dict[str, tuple[str, str]]
    ] = None
)
```
Bases: Base
Track NOK (defective) parts and quality metrics.
Each UUID represents one signal:

- ok_counter_uuid: counter for good parts
- nok_counter_uuid: counter for defective parts
- part_id_uuid: part number signal (optional)
- defect_reason_uuid: defect reason code (optional)
Example usage

```
tracker = QualityTracking(df)

# NOK parts per shift
shift_nok = tracker.nok_by_shift(
    ok_counter_uuid='good_parts',
    nok_counter_uuid='bad_parts'
)

# Quality by part number
part_quality = tracker.quality_by_part(
    ok_counter_uuid='good_parts',
    nok_counter_uuid='bad_parts',
    part_id_uuid='part_number'
)
```
Initialize quality tracker.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataframe | DataFrame | Input DataFrame with timeseries data | required |
| time_column | str | Name of timestamp column | 'systime' |
| shift_definitions | Optional[Dict[str, tuple[str, str]]] | Dictionary mapping shift names to (start, end) times. Default: 3-shift operation (06:00-14:00, 14:00-22:00, 22:00-06:00) | None |
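Note that the default third shift (22:00-06:00) crosses midnight, so assigning a timestamp to its shift needs a wrap-around comparison rather than a plain start <= t < end check. A sketch of that logic (the helper name is made up for illustration):

```python
from datetime import time

# Default shift windows from the docstring above
SHIFTS = {
    "shift_1": (time(6, 0), time(14, 0)),
    "shift_2": (time(14, 0), time(22, 0)),
    "shift_3": (time(22, 0), time(6, 0)),  # wraps past midnight
}

def shift_for(t: time) -> str:
    """Return the name of the shift containing a time of day."""
    for name, (start, end) in SHIFTS.items():
        if start < end:
            if start <= t < end:
                return name
        elif t >= start or t < end:  # overnight window
            return name
    raise ValueError(f"no shift covers {t}")
```

With these defaults every time of day falls in exactly one shift; e.g. 23:30 and 02:00 both land in shift_3.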
nok_by_shift

```
nok_by_shift(
    ok_counter_uuid: str,
    nok_counter_uuid: str,
    *,
    value_column: str = "value_integer"
) -> pd.DataFrame
```
Calculate NOK (defective) parts per shift.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| ok_counter_uuid | str | UUID for good parts counter | required |
| nok_counter_uuid | str | UUID for defective parts counter | required |
| value_column | str | Column containing counter values | 'value_integer' |
Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with quality metrics by shift: date, shift, ok_parts, nok_parts, total_parts, nok_rate_pct, first_pass_yield_pct |
Example

```
>>> nok_by_shift('good_counter', 'bad_counter')
         date    shift  ok_parts  nok_parts  total_parts  nok_rate_pct  first_pass_yield_pct
0  2024-01-01  shift_1       450         12          462           2.6                  97.4
1  2024-01-01  shift_2       425         18          443           4.1                  95.9
2  2024-01-01  shift_3       380         25          405           6.2                  93.8
```
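The two ratio columns are straightforward derivations from the counters, with first pass yield being the complement of the NOK rate. Roughly (illustrative data and plain pandas, not the module's code):

```python
import pandas as pd

# Hypothetical per-shift counter totals
q = pd.DataFrame({
    "shift": ["shift_1", "shift_2"],
    "ok_parts": [450, 425],
    "nok_parts": [12, 18],
})

q["total_parts"] = q["ok_parts"] + q["nok_parts"]
# Defective share of everything produced, and its complement
q["nok_rate_pct"] = (100 * q["nok_parts"] / q["total_parts"]).round(1)
q["first_pass_yield_pct"] = (100 * q["ok_parts"] / q["total_parts"]).round(1)
```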
quality_by_part

```
quality_by_part(
    ok_counter_uuid: str,
    nok_counter_uuid: str,
    part_id_uuid: str,
    *,
    value_column_counter: str = "value_integer",
    value_column_part: str = "value_string"
) -> pd.DataFrame
```
Calculate quality metrics by part number.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| ok_counter_uuid | str | UUID for good parts counter | required |
| nok_counter_uuid | str | UUID for defective parts counter | required |
| part_id_uuid | str | UUID for part number signal | required |
| value_column_counter | str | Column containing counter values | 'value_integer' |
| value_column_part | str | Column containing part numbers | 'value_string' |
Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with quality by part: part_number, ok_parts, nok_parts, total_parts, nok_rate_pct, first_pass_yield_pct |
Example

```
>>> quality_by_part('good', 'bad', 'part_id')
  part_number  ok_parts  nok_parts  total_parts  nok_rate_pct  first_pass_yield_pct
0      PART_A      1255         55         1310           4.2                  95.8
1      PART_B       890         38          928           4.1                  95.9
```
nok_by_reason

```
nok_by_reason(
    nok_counter_uuid: str,
    defect_reason_uuid: str,
    *,
    value_column_counter: str = "value_integer",
    value_column_reason: str = "value_string"
) -> pd.DataFrame
```
Analyze NOK parts by defect reason.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| nok_counter_uuid | str | UUID for defective parts counter | required |
| defect_reason_uuid | str | UUID for defect reason signal | required |
| value_column_counter | str | Column containing counter values | 'value_integer' |
| value_column_reason | str | Column containing reason codes | 'value_string' |
Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with NOK by reason: reason, nok_parts, pct_of_total |
Example

```
>>> nok_by_reason('bad_parts', 'defect_reason')
            reason  nok_parts  pct_of_total
0  Dimension_Error         45          40.5
1   Surface_Defect         28          25.2
2      Wrong_Color         22          19.8
```
daily_quality_summary

```
daily_quality_summary(
    ok_counter_uuid: str,
    nok_counter_uuid: str,
    *,
    value_column: str = "value_integer"
) -> pd.DataFrame
```
Daily quality summary.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| ok_counter_uuid | str | UUID for good parts counter | required |
| nok_counter_uuid | str | UUID for defective parts counter | required |
| value_column | str | Column containing counter values | 'value_integer' |
Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with daily quality: date, ok_parts, nok_parts, total_parts, nok_rate_pct, first_pass_yield_pct |
Example

```
>>> daily_quality_summary('good', 'bad')
         date  ok_parts  nok_parts  total_parts  nok_rate_pct  first_pass_yield_pct
0  2024-01-01      1255         55         1310           4.2                  95.8
1  2024-01-02      1308         42         1350           3.1                  96.9
2  2024-01-03      1290         60         1350           4.4                  95.6
```