Solar SCADA Architecture and Control Signals for Utility-Scale PV
Utility-scale solar plants aren’t “build it and forget it.” Once a project is tied to the grid, the SCADA system becomes the operational backbone that proves performance, supports grid compliance, and keeps data trustworthy for years.
This guide is for EPCs, owners/operators, commissioning leads, SCADA/DAS engineers, and O&M teams who need a clear picture of how solar SCADA is typically structured, what control signals matter most, and how to validate everything end-to-end so COD doesn’t turn into months of cleanup.
SCADA vs DAS: a quick, practical definition
DAS (Data Acquisition System) focuses on collecting and reporting data: irradiance, inverter metrics, meters, weather sensors, and alarms. It answers, “What is happening?”
SCADA (Supervisory Control and Data Acquisition) includes data acquisition but adds supervisory control functions used to meet grid requirements and operating procedures. It answers, “What is happening, and what are we commanding the plant to do?”
In practice, many projects use both terms loosely. The simplest rule that holds up in the field: if a system is issuing plant-level commands (or must support utility/ISO control), you are in SCADA territory.
What “architecture” means in solar SCADA
Solar SCADA architecture is the full chain from field devices to the operator interface and (often) to a remote utility/ISO endpoint. It includes hardware, networks, protocols, time sync, cybersecurity boundaries, and the logic that turns setpoints into plant behavior.
Most commissioning pain comes from gaps between these layers: good devices but weak networks, correct networks but wrong scaling, correct signals but missing documentation, or correct everything but no repeatable test procedure.
Typical utility-scale solar SCADA reference architecture
While every project differs by utility requirements and vendor stack, many utility-scale PV sites follow a similar structure.
1) Field layer: devices that generate data and accept commands
- Inverters (string or central) and inverter controllers
- Tracker controllers (for single-axis tracking plants)
- Revenue-grade meter(s) and plant meters
- Weather/MET sensors (pyranometers, reference cells, wind, ambient/module temps)
- Protection relays and substation devices (where applicable)
- Power Plant Controller (PPC) or plant controller functions (sometimes embedded in another platform)
2) Network layer: the plant data highway
This includes copper and fiber media, switches, routers/firewalls, patching, and segmentation. For utility-scale PV, fiber is common for long runs and electrical noise immunity. Managed switching and proper segmentation (subnets/VLANs) are how you keep a "flat" network from becoming fragile and hard to troubleshoot.
- Managed Ethernet switches at power conversion stations (PCS) / inverter skids
- Fiber rings or star topologies (depending on design)
- VLANs/subnets to isolate traffic (e.g., operations vs utility access)
- Remote connectivity via carrier circuits, cellular routers, or utility-provided comms
3) Control & compute layer: where logic and data services run
- SCADA server(s) and historian (on-prem, hosted, or hybrid)
- RTU/RTAC (remote terminal unit / real-time automation controller) for utility-facing protocols and I/O handling
- PPC logic (Volt/VAR, frequency-watt, curtailment, ramp rates) if required
- Time synchronization (commonly NTP; sometimes GPS-based time sources)
4) Presentation layer: how humans operate the plant
- HMI (human-machine interface) dashboards
- Alarm management and notification (email/SMS/SCADA alerts)
- Reports for compliance, performance, and O&M
5) Utility/ISO interface layer: signals that prove compliance
Utilities commonly require a defined set of telemetry points and a defined set of control capabilities, plus an agreed protocol and test plan. That interface is where many last-minute commissioning issues surface.
Common communication protocols you’ll see (and why they matter)
Protocols matter because they affect testing tools, troubleshooting workflows, and the “truth” of your signals when multiple gateways are involved.
- Modbus (RTU/TCP): common for inverter, meter, weather station, and tracker data on the plant LAN.
- DNP3: frequently used for utility telemetry and control because it is designed for SCADA use cases.
- IEC 60870-5-104: also common in utility telecontrol environments.
Many architectures use Modbus internally and DNP3 or IEC 104 at the utility boundary via an RTU/RTAC gateway.
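Because each gateway hop is a chance to lose scaling or sign information, it helps to treat the mapping itself as reviewable logic. The sketch below is a minimal, hypothetical illustration (not any vendor's gateway configuration) of how a raw Modbus register value might be converted before being republished as a utility-facing point; the point name, scale factor, and sign convention are assumptions made for the example.

```python
# Minimal sketch (not a real gateway): how a raw Modbus register value might be
# scaled and sign-adjusted before being republished as a utility-facing point.
# Point name, scale factor, and sign convention are hypothetical examples.

from dataclasses import dataclass

@dataclass
class PointMap:
    name: str          # utility-facing point name
    scale: float       # engineering-unit multiplier applied to the raw register
    offset: float      # engineering-unit offset
    invert_sign: bool  # flip sign if device and utility conventions differ

def to_utility_value(raw_register: int, point: PointMap) -> float:
    """Convert a raw integer register into the utility-facing engineering value."""
    value = raw_register * point.scale + point.offset
    return -value if point.invert_sign else value

# Example: a plant meter reports MW as a signed register scaled by 0.01,
# with export negative at the device but positive toward the utility.
poi_mw = PointMap(name="POI_MW", scale=0.01, offset=0.0, invert_sign=True)
print(to_utility_value(-4523, poi_mw))  # -> 45.23 MW exported
```

Keeping this translation in one documented place (rather than scattered across device configs and gateway tables) makes end-to-end testing far easier.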
Control signals that typically matter for interconnection
Exact points vary by interconnection agreement, utility standards, and regional grid rules. Still, most requirements fall into a few practical categories: active power control, reactive power/voltage support, availability/curtailment states, and verified measurements.
Plant active power commands
- Plant MW (kW) setpoint or percent setpoint
- Ramp rate limits (up/down)
- Start/stop or enable/disable generation (implementation varies)
- Fixed curtailment modes (e.g., a limit during peak irradiance windows)
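As a rough illustration of how a ramp-rate limit interacts with a MW setpoint, the sketch below steps a commanded value toward a curtailment target without exceeding a placeholder 10 MW/min limit. The capacity and limit values are assumptions; real limits come from the interconnection agreement and PPC configuration.

```python
# Minimal sketch of a ramp-rate limit applied to an active power setpoint.
# The 10 MW/min limit and 100 MW capacity are placeholder values, not a
# recommendation; actual limits come from the interconnection agreement.

PLANT_CAPACITY_MW = 100.0
RAMP_LIMIT_MW_PER_MIN = 10.0   # e.g., 10% of capacity per minute

def next_setpoint(current_mw: float, target_mw: float, dt_seconds: float) -> float:
    """Move the commanded setpoint toward the target without exceeding the ramp limit."""
    max_step = RAMP_LIMIT_MW_PER_MIN * (dt_seconds / 60.0)
    step = max(-max_step, min(max_step, target_mw - current_mw))
    return current_mw + step

# Simulate a curtailment from 95 MW down to 50 MW, evaluated every 5 seconds.
sp = 95.0
for _ in range(12):
    sp = next_setpoint(sp, 50.0, dt_seconds=5.0)
print(round(sp, 2))  # 85.0 after 60 s at 10 MW/min
```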
Reactive power and voltage support
- Power factor setpoint
- VAR setpoint or voltage setpoint (depending on PPC strategy)
- Volt/VAR curves and modes (utility-specific)
- Voltage ride-through settings are typically configured at device/controller level and verified in tests (not “toggled” daily)
Frequency response modes (grid support)
- Frequency-watt or droop modes
- Frequency trip/ride-through settings (configured, then validated)
Status/telemetry points utilities often require
- Net plant MW and MVAR at POI (point of interconnection)
- Voltage, frequency at POI
- Breaker/status indications (substation dependent)
- Plant availability / curtailment state / PPC mode
- Communications health and data quality indicators
A commissioning-focused way to think about signals: “measurement, meaning, control”
If a point is going to be used operationally (or contractually), validate it through three lenses:
- Measurement: Is the underlying device reading correct (meter class, scaling, calibration, wiring)?
- Meaning: Is the point defined consistently end-to-end (units, sign conventions, scaling, timestamps, quality flags)?
- Control: When a command is issued, does the plant respond predictably and within agreed tolerances?
This framing prevents a common failure mode: “The point moves on the HMI, so it must be right.” Movement is not validation.
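One lightweight way to keep the three lenses honest during commissioning is to track them per point with explicit evidence, as in the hypothetical structure below (field names and status strings are illustrative, not a prescribed format).

```python
# A sketch of tracking each point through the three lenses. Field names and
# statuses are illustrative; the point of the structure is that "it moves on
# the HMI" is never recorded as evidence for any lens.

from dataclasses import dataclass

@dataclass
class PointValidation:
    point_name: str
    measurement: str = "not tested"  # e.g., "verified against meter display"
    meaning: str = "not tested"      # e.g., "units/sign/scaling match utility list"
    control: str = "n/a"             # command points only: observed response summary

    def complete(self) -> bool:
        return "not tested" not in (self.measurement, self.meaning, self.control)

poi_mw = PointValidation("POI_MW",
                         measurement="matches revenue meter within 0.5%",
                         meaning="MW, export positive, scale 0.01 verified",
                         control="n/a")
print(poi_mw.complete())  # True
```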
End-to-end validation checklist (COD-minded)
The goal is to prove each signal and command across the full chain: field device → network → controller/RTU → SCADA/HMI → utility endpoint (if applicable). Below is a practical checklist you can adapt to your site test plan.
1) Start with a point list that’s actually testable
Before field testing, lock a point list that includes: name, description, device source, register/address, scaling, units, expected ranges, and whether it is telemetry, alarm, or control. If the utility has a required point list, map each required point to a plant source and document the transformation.
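Below is a minimal sketch of what a "testable" point entry might look like, with a completeness check that flags anything left blank. Field names are assumptions; adapt them to the utility's required point list and your own naming conventions.

```python
# A sketch of a "testable" point list entry and a completeness check.
# Field names are illustrative; adapt them to your utility's required point list.

from dataclasses import dataclass, fields
from typing import Optional, Tuple

@dataclass
class PointDefinition:
    name: str
    description: str
    device_source: str            # e.g., "Meter-01", "INV-PCS-03"
    address: str                  # register/index on the source device
    scaling: float
    units: str
    expected_range: Tuple[float, float]
    kind: str                     # "telemetry", "alarm", or "control"
    utility_point: Optional[str]  # mapped utility/ISO point name, if required

def missing_fields(point: PointDefinition) -> list:
    """Return the names of any empty fields that would make the point untestable."""
    empty = []
    for f in fields(point):
        value = getattr(point, f.name)
        if value in (None, "", ()):
            empty.append(f.name)
    return empty

poi_mw = PointDefinition(
    name="POI_MW", description="Net plant active power at POI",
    device_source="Meter-01", address="40001", scaling=0.01, units="MW",
    expected_range=(-5.0, 105.0), kind="telemetry", utility_point="PLANT_MW")
print(missing_fields(poi_mw))  # [] means the point is fully defined and testable
```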
2) Validate time and data integrity early
- Confirm NTP is working and consistent across servers/controllers.
- Verify timestamps and historian logging intervals match requirements.
- Check for duplicate points, stale values, and “good” quality flags on bad data.
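A small sketch of the kind of staleness and clock-skew checks worth automating early is shown below; the 120-second staleness window and 2-second skew threshold are placeholders, not requirements.

```python
# Minimal sketch: flag stale values and timestamp drift in historian samples.
# Thresholds (120 s staleness, 2 s clock skew) are placeholders, not requirements.

from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(seconds=120)
MAX_CLOCK_SKEW = timedelta(seconds=2)

def check_sample(point_name: str, sample_time: datetime, server_time: datetime):
    """Return a list of data-integrity findings for one historian sample."""
    findings = []
    if server_time - sample_time > STALE_AFTER:
        findings.append(f"{point_name}: stale value ({server_time - sample_time} old)")
    if sample_time - server_time > MAX_CLOCK_SKEW:
        findings.append(f"{point_name}: timestamp ahead of server clock (check NTP)")
    return findings

now = datetime.now(timezone.utc)
print(check_sample("POI_MW", now - timedelta(minutes=10), now))
```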
3) Prove the network (not just link lights)
Network issues are a top driver of intermittent data gaps. Validate the architecture with evidence, including switch configurations and fiber test results.
- Managed switch configuration backups (as-built)
- VLAN/subnet documentation and IP plan
- Port maps for key switches
- Latency/packet loss checks where practical
For fiber, keep baseline documentation. OTDR traces act like a fingerprint of the cable at turnover.
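Where active probing is allowed on the plant LAN, even a simple TCP reachability and latency check against key devices adds useful evidence. The sketch below assumes Modbus TCP on port 502 and uses made-up device names and IPs; it supplements, rather than replaces, switch configurations and fiber test results.

```python
# A rough reachability and latency probe for plant devices, assuming Modbus TCP
# on port 502. Device names and IPs are hypothetical examples from an IP plan.

import socket
import time

DEVICES = {"INV-PCS-01": "10.10.1.11", "Meter-01": "10.10.2.5"}  # example IP plan

def probe(ip: str, port: int = 502, timeout: float = 2.0):
    """Return (reachable, round_trip_ms) for a single TCP connection attempt."""
    start = time.monotonic()
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True, (time.monotonic() - start) * 1000.0
    except OSError:
        return False, None

for name, ip in DEVICES.items():
    ok, rtt = probe(ip)
    print(f"{name} ({ip}): {'OK, %.1f ms' % rtt if ok else 'UNREACHABLE'}")
```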
4) Test telemetry points with known-good references
- Compare revenue meter values against SCADA values (units and sign conventions).
- For irradiance sensors, confirm scaling and orientation (POA vs GHI) and ensure the signal is stable and plausible.
- For inverter power, compare inverter-reported power vs aggregated plant power to catch mapping/scaling errors.
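A basic tolerance comparison like the sketch below catches most scaling and sign-convention errors quickly; the 0.5% tolerance is a placeholder, and the actual acceptance criteria should come from the test plan.

```python
# Sketch of a tolerance check between a reference reading (e.g., revenue meter
# display) and the value shown in SCADA. The 0.5% tolerance is a placeholder;
# use the tolerance from your test plan or interconnection requirements.

def within_tolerance(reference: float, scada_value: float, pct: float = 0.5) -> bool:
    """True if the SCADA value is within pct percent of the reference reading."""
    if reference == 0:
        return abs(scada_value) < 1e-6
    return abs(scada_value - reference) / abs(reference) * 100.0 <= pct

# Example: meter shows 45.23 MW export; SCADA shows 45.21 MW.
print(within_tolerance(45.23, 45.21))   # True
# Sign-convention errors show up immediately as a gross mismatch:
print(within_tolerance(45.23, -45.21))  # False
```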
5) Test control signals in a safe, staged way
Controls should be tested using a written procedure with roles, prerequisites, and rollback steps. Many projects start with small step changes and ramp gradually.
- Verify command path to PPC/controller (protocol, addressing, permissions).
- Issue a small active power setpoint change and verify response time and stability.
- Validate ramp behavior (does the plant follow the ramp limit or overshoot?).
- Test reactive power or power factor commands with measured confirmation at POI.
- Confirm alarms/events are generated and logged when expected.
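Evaluating a step test from logged data can be as simple as checking whether the POI measurement settles inside an agreed band within an agreed time. The sketch below uses a hypothetical ±2% band and 60-second settle time; your tolerances should come from the utility test plan.

```python
# Sketch of how a staged setpoint test might be evaluated from logged data:
# did the plant reach the commanded value within an agreed settle time and band?
# The 60 s settle time and ±2% band are placeholders from a hypothetical test plan.

def evaluate_step_test(samples, commanded_mw, settle_s=60.0, band_pct=2.0):
    """samples: list of (seconds_after_command, measured_poi_mw) tuples."""
    band = commanded_mw * band_pct / 100.0
    for t, mw in samples:
        if abs(mw - commanded_mw) <= band:
            return {"passed": t <= settle_s, "settle_time_s": t}
    return {"passed": False, "settle_time_s": None}

# Logged POI response after commanding 80 MW down from 95 MW:
log = [(5, 93.0), (15, 88.2), (30, 83.5), (45, 80.9), (55, 80.2)]
print(evaluate_step_test(log, commanded_mw=80.0))
# -> {'passed': True, 'settle_time_s': 45}
```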
6) Validate alarms like an operator (not like an installer)
Alarm floods and nuisance alarms hurt response times after COD. Build an alarm strategy that includes:
- Clear priorities (critical vs warning vs informational)
- Defined setpoints and deadbands
- Meaningful text (what happened and what to check)
- Notification rules (who gets what, when)
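Deadbands are the cheapest defense against chattering alarms. The sketch below shows a high-limit alarm that sets above the limit but only clears once the value falls below the limit minus a deadband; the limit, deadband, and temperature values are illustrative only.

```python
# Sketch of deadband-based alarm evaluation to avoid chattering alarms.
# Limit, deadband, and temperature values are illustrative only.

def evaluate_alarm(value, active, high_limit=55.0, deadband=2.0):
    """Return the new alarm state for a high-limit alarm with a deadband.

    The alarm sets above high_limit and only clears once the value drops
    below (high_limit - deadband), preventing rapid set/clear cycles."""
    if not active and value > high_limit:
        return True
    if active and value < high_limit - deadband:
        return False
    return active

# Inverter heatsink temperature hovering around the limit:
state = False
for temp in [54.8, 55.2, 54.6, 53.4, 52.9]:
    state = evaluate_alarm(temp, state)
    print(f"{temp:5.1f} C -> alarm {'ACTIVE' if state else 'clear'}")
```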
7) Deliver documentation that future teams can use
Turnover packages should support troubleshooting months later, not just pass a final inspection. A solid package often includes:
- As-built network drawings and IP plan
- As-built fiber route and test results (OTDR/OLTS)
- Device configurations and firmware versions
- Point list with scaling and source mapping
- Commissioning test results and signed procedures
Architecture choices and trade-offs (what to decide early)
| Decision | Why it matters | Common trade-off |
|---|---|---|
| Flat network vs segmented (VLANs/subnets) | Controls broadcast traffic, limits blast radius, improves troubleshooting | Segmentation requires stronger documentation and switch configuration discipline |
| Fiber ring vs star topology | Redundancy and fault tolerance | Rings add complexity; star can be simpler but may create single points of failure |
| PPC location (dedicated vs integrated) | Defines where setpoints are enforced and how fast controls respond | Integrated solutions can reduce components but complicate vendor boundaries |
| Protocol mapping (Modbus internally, DNP3/IEC at boundary) | Determines how points are translated and tested end-to-end | More gateways can mean more scaling/sign errors if point discipline is weak |
Common failure modes (and how to prevent them)
“It worked yesterday” data gaps
Often caused by marginal fiber terminations, dirty connectors, poor splices, unmanaged switches, or power quality issues in field enclosures. Baseline fiber testing and managed network standards reduce these intermittent issues.
Scaling/sign convention mistakes at the POI
A frequent issue is MW/MVAR sign conventions and scaling differences between devices, SCADA, and utility telemetry. Prevent this by documenting sign conventions and validating against meter measurements during tests.
Command works at the controller but not at the plant
This is usually a mapping/permissions issue (wrong register, wrong mode, command inhibited) or a boundary condition (plant not in the correct state to accept commands). Test command prerequisites explicitly and log plant mode/state during testing.
Alarm floods that train operators to ignore alarms
If every warning becomes a page, nothing is actionable. Apply deadbands, rationalize priorities, and tune alarm thresholds after initial data is stable.
How to measure SCADA success after COD
- Data availability: percent of time critical points are present and “good quality”
- Mean time to isolate comms faults: how quickly teams can locate the failure domain (device vs network vs server)
- Alarm effectiveness: ratio of actionable alarms to nuisance alarms
- Control performance: repeatable response to MW/PF/VAR commands within required tolerances
- Documentation completeness: ability for O&M to troubleshoot without rebuilding the story from scratch
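Two of these metrics are straightforward to compute from historian and alarm-log exports, as the sketch below illustrates with made-up sample counts.

```python
# Sketch of two post-COD metrics: data availability for critical points and
# the actionable-to-total alarm ratio. Sample counts are made up for illustration.

def data_availability(samples_good: int, samples_expected: int) -> float:
    """Percent of expected samples that arrived with good quality."""
    return 100.0 * samples_good / samples_expected if samples_expected else 0.0

def alarm_effectiveness(actionable: int, nuisance: int) -> float:
    """Fraction of raised alarms that required operator action."""
    total = actionable + nuisance
    return actionable / total if total else 0.0

# One week of 1-minute samples for a critical POI point:
print(f"POI_MW availability: {data_availability(10021, 10080):.2f}%")
print(f"Alarm effectiveness: {alarm_effectiveness(37, 112):.0%}")
```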
Conclusion: build it commissioning-ready, not “commission it later”
Utility-scale PV SCADA is a system-of-systems: devices, network, logic, HMI, and a utility-facing boundary. When architecture and signals are treated as an integrated scope—with disciplined point definitions, validated comms, and clean documentation—commissioning moves faster and operations get reliable data from day one.
If you’re preparing for commissioning or cleaning up unreliable data after COD, REIG can help you validate signals end-to-end (including fiber verification), tighten network architecture, and deliver a commissioning-ready SCADA + DAS handoff that operators can trust.
FAQ
What is the difference between SCADA and DAS on a solar PV plant?
A DAS primarily collects and reports data (monitoring). SCADA includes data acquisition but also supports supervisory control functions such as curtailment, power factor/VAR control, and utility/ISO command pathways. If the system must execute plant-level setpoints or meet grid control requirements, it functions as SCADA even if teams call it “DAS” informally.
Which control signals are most important for utility interconnection?
Most utilities focus on active power control (MW or percent setpoint, ramps, enable/disable behaviors) and reactive power/voltage support (power factor, VAR or voltage setpoints, mode/status). They also commonly require POI telemetry such as MW, MVAR, voltage, and frequency, plus plant/PPC status. The exact list and test tolerances should be taken from the interconnection agreement and utility standards.
How do you validate SCADA points end-to-end during commissioning?
Start with a point list that includes source, addressing, scaling, units, and expected ranges. Validate the network and time sync, then test telemetry against known-good references such as revenue meters and device local readings. For control points, use a staged procedure: small setpoint changes, verify plant response at the POI, confirm ramp behavior, and ensure alarms/events log correctly.
Why do fiber OTDR and OLTS results matter for solar SCADA networks?
OTDR traces and OLTS loss measurements document the health of fiber links and help isolate faults quickly when issues occur later. They provide baseline evidence of splice quality, connector issues, bends, or breaks and support warranty and turnover requirements. In long-run utility-scale sites, fiber documentation can prevent extended downtime caused by “invisible” comms failures.
What documentation should be included in a SCADA turnover package?
A strong package typically includes as-built network drawings, an IP plan and VLAN/subnet mapping, switch configuration backups, and a clean point list with scaling and source mapping. Include fiber routes and OTDR/OLTS results, device configurations and firmware versions, and signed commissioning test procedures/results. This enables O&M teams to troubleshoot without recreating the design intent.
References
- NIST: Guide to Industrial Control Systems (ICS) Security (SP 800-82 Rev. 2)
- U.S. DOE: 21 Steps to Improve Cybersecurity of SCADA Networks
- IEEE Std 1547-2018: Interconnection and Interoperability of Distributed Energy Resources
- NERC: Critical Infrastructure Protection (CIP) Standards (landing page)
- IEC 60870-5-104 Overview (IEC standard entry)
Next step
Need to prove SCADA signals end-to-end for COD (or stabilize plant data after turnover)? REIG integrates utility-scale solar SCADA + DAS from network architecture through commissioning, including fiber verification and clean documentation. Share your point list and utility requirements and we’ll help you build a commissioning-ready path to reliable data and dependable controls.
