SP Lab Simulation with EVE-NG and PNETLab

Service-provider networking is a craft you learn by configuring, breaking, and fixing — not by reading. This chapter covers the two mainstream emulator platforms (EVE-NG and PNETLab), realistic hardware requirements, image sourcing strategy, a reference SP topology to build, and a curated set of lab exercises that drill every concept from the companion chapters. It closes with a multi-month learning path that compresses roughly a new SP engineer’s first year into structured self-study.

Highlight: Key Insight Reading teaches vocabulary; labs teach instincts. Both matter, but only the second lets you troubleshoot a production network at 03:00.

The emulator landscape

EVE-NG and PNETLab are web-based network emulators. Both run real vendor router images — Cisco IOS/IOS-XE/IOS-XR, Juniper Junos, Arista EOS, Nokia SR OS, Huawei VRP, and others — as virtual machines or containers. Topology editing happens in a browser; nodes are wired with virtual links; devices are consoled via Telnet, SSH, or VNC. The traffic on simulated links is real protocol traffic: actual BGP UPDATE messages, genuine OSPF LSAs, legitimate MPLS label operations. What you learn here transfers directly to physical hardware.

Under the hood, both platforms share the same substrate. QEMU/KVM runs full virtual machines; Docker runs lightweight appliances; Dynamips runs legacy Cisco images. The platforms wrap that substrate with a topology editor, a console multiplexer, and virtual bridging.


EVE-NG

The older, more established platform. The Community edition is free and supports most vendor images. The Professional edition is paid and adds multi-user collaboration, hot-linking of running nodes, configuration export, and an enhanced GUI [8].

PNETLab

A free fork that has grown its own ecosystem. Feature parity with EVE-NG Professional at zero cost. Ships with a Lab Store — a community catalogue of pre-built topologies ready to download and run.

Platform comparison

| Dimension | EVE-NG Community | EVE-NG Professional | PNETLab |
| --- | --- | --- | --- |
| Cost | Free | Paid licence [8] | Free |
| Image support | Most vendors | Most vendors | Most vendors |
| Multi-user collab | No | Yes [8] | Yes |
| Config export | Manual | Built-in [8] | Built-in |
| Lab marketplace | Community-maintained | Community-maintained | Built-in Lab Store |
| Substrate | QEMU/KVM, Docker, Dynamips | Same | Same |

Hardware and deployment

Meaningful SP labs — multi-PE MPLS with full IS-IS + LDP + MP-BGP — demand serious host resources. IOS-XR images alone are 4–6 GB each on disk, and the RAM cost depends sharply on which XR variant you run: the lightweight XRv is control-plane-only and runs in roughly 3 GB of RAM, while XRv9000 (the data-plane-capable image) needs 14–16 GB of RAM and 4 vCPUs per running instance, per the Cisco IOS-XRv 9000 Router datasheet [7]. Pick the variant that matches what the lab actually exercises — there is no point burning 16 GB on a P router that only forwards labels.

| Resource | Minimum | Recommended | Why it matters |
| --- | --- | --- | --- |
| CPU cores | 8 | 12+ | Each router VM pins at least one vCPU; the nested hypervisor needs headroom |
| Virtualisation | Intel VT-x / AMD-V enabled in BIOS | Same | KVM cannot run vendor images without hardware virtualisation |
| RAM | 32 GB (lightweight XRv / cRPD / FRR only) | 64–96+ GB | Four XRv9000 PEs alone need ≥64 GB (16 GB × 4) [7]; add two P routers and an RR and you are well past 80 GB. Drop to control-plane-only XRv or to cRPD/FRR if you must fit inside 32 GB. |
| Storage | 500 GB SSD | 1 TB NVMe | Image library plus lab snapshots; SSD latency affects boot times |
| Host OS | Ubuntu Server 20.04/22.04 bare metal | Same, or nested under ESXi/Workstation | Both platforms ship as Ubuntu appliances |
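Sizing is easy to get wrong by a factor of two, so it is worth budgeting RAM before buying hardware. A minimal sketch: the XRv9000 (16 GB) and XRv (3 GB) figures follow this chapter [7], while the cRPD/FRR footprints and the 8 GB host headroom are rough assumptions of mine:

```python
# Back-of-the-envelope RAM budget for a planned topology.
# XRv9000 (16 GB) and XRv (3 GB) follow the chapter's figures [7];
# the cRPD/FRR footprints and the 8 GB headroom are assumptions.
FOOTPRINT_GB = {"xrv9000": 16, "xrv": 3, "crpd": 1, "frr": 0.25}

def ram_budget(nodes: dict[str, int], headroom_gb: float = 8) -> float:
    """Total RAM: per-node footprints plus host/hypervisor headroom."""
    return sum(FOOTPRINT_GB[image] * count
               for image, count in nodes.items()) + headroom_gb

# Four XRv9000 PEs, two FRR P routers, one lightweight XRv RR:
print(ram_budget({"xrv9000": 4, "frr": 2, "xrv": 1}))  # 75.5
```

The example topology lands just inside a 96 GB host but nowhere near 64 GB once you add CEs, which is exactly the trap the table above warns about.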

Deployment options in practice: a beefy laptop for light labs, a dedicated lab server for serious work, or a cloud VM on a provider that supports nested virtualisation. Google Cloud, AWS, and specialists such as Equinix Metal offer bare-metal instances where nested KVM works. Consumer-grade cloud VMs generally do not.

The image problem

Neither platform ships with vendor images. Cisco, Juniper, Nokia, and others do not license their images for redistribution, so the images must be sourced separately.

| Source | Cost | Licensing | Notes |
| --- | --- | --- | --- |
| Cisco Modeling Labs (CML) | Tiered: CML-Free, CML-Personal, CML-Personal Plus, CML-Enterprise [5, 6] | Full licence on each tier | CML-Free caps the topology at 5 nodes and forces telemetry on [5]; it bundles IOLv / IOLv2 / ASAv. XRv9000, XRd, and Cat8000v are downloadable separately and consume the node cap [6]. |
| Cisco DevNet Sandbox | Free | Cloud-hosted | No local setup; remote access to real devices |
| Juniper vLabs | Free with a juniper.net account [12] | Per-user | vJunos-Switch, vJunos-Router, and vJunos Evolved are free downloads; the dataplane is throttled to roughly 100 Mbps aggregate per instance [12]. |
| Arista vEOS | Free | Per-user after registration | Freely downloadable |
| Nokia SR Linux (container) | Free | No registration | Container is freely pullable from the Nokia GitHub registry [13]; unlicensed image is capped at ~1000 pps and a 2-week runtime ceiling per process [13]. The classic SR OS vSIM remains gated behind a contract / licence. |
| FRRouting (FRR) | Free | Open source (GPL) | Production-grade, no licensing issues |
| BIRD | Free | Open source | Production-grade routing daemon |
| VyOS | Free | Open source | Full router distribution |

Highlight: Warning Community forums frequently share vendor images that were never licensed for redistribution. Downloading them is a licence violation. Most published labs assume the reader already holds the images through legitimate channels. A full SP lab built entirely on open-source stacks (FRR, BIRD, VyOS) is a legal, capable alternative.

A realistic SP reference topology

The natural starting project is to reproduce the Tier-1 reference topology from [[01-tier1-sp-architecture-l3vpn]] — four PEs, two P routers, one RR, customer edges attached as needed.

```mermaid
graph LR
    CE1[CE1] --- PE1
    CE2[CE2] --- PE2
    CE3[CE3] --- PE3
    CE4[CE4] --- PE4
    PE1 --- P1
    PE2 --- P1
    PE3 --- P2
    PE4 --- P2
    P1 --- P2
    PE1 -. iBGP .-> RR
    PE2 -. iBGP .-> RR
    PE3 -. iBGP .-> RR
    PE4 -. iBGP .-> RR
    subgraph Core
      P1
      P2
      RR
    end
    subgraph Edge
      PE1
      PE2
      PE3
      PE4
    end
```

Node role assignment

| Role | Count | Suggested image | Responsibility |
| --- | --- | --- | --- |
| PE | 4 | Cisco IOS-XR (XRv9000 if RAM allows) | IS-IS, MPLS-TE, MP-BGP VPNv4 and L2VPN-EVPN |
| P | 2 | XRd Control-Plane, Juniper cRPD, or FRR [9] (legitimate, redistributable images; vIOS is not licensed for redistribution) | IS-IS and LDP only — pure transit |
| RR | 1 | IOS-XR or Junos | Control plane only; small footprint |
| CE | As needed | Linux container + FRR [9] | Per-service customer endpoints |

- All core links point-to-point, 1 Gbps simulated bandwidth.
- Loopback 0 on every node; NET addresses derived from the loopback.
- IS-IS Level-2 only, single area.
- MPLS LDP on every core-facing interface.
- RR peers every PE via loopback for both VPNv4 and L2VPN-EVPN address families.
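Under those rules, a PE's core-facing baseline comes out to only a handful of lines. A hedged IOS-XR sketch (the process name CORE, the NET, and the interface names are placeholders):

```
router isis CORE
 is-type level-2-only
 net 49.0001.0000.0000.0001.00
 interface Loopback0
  passive
  address-family ipv4 unicast
 !
 interface GigabitEthernet0/0/0/0
  point-to-point
  address-family ipv4 unicast
!
mpls ldp
 interface GigabitEthernet0/0/0/0
```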

Once the IGP and MPLS underlay are stable, layer the services from [[02-sp-services-dia-l2vpn-vpls-evpn]] one at a time: L3VPN, then DIA, then VPWS, then VPLS, then EVPN. Each service exercises a distinct piece of the architecture.

The killer labs

These six exercises each isolate a single, high-value concept. They are ordered by conceptual dependency — Lab 1 assumes a bare IGP; Lab 6 assumes everything before it.

| Lab | Concept exercised | Target metric | Companion chapter |
| --- | --- | --- | --- |
| 1 | IS-IS convergence with and without BFD | Reconvergence time drops from seconds to milliseconds | [[01-tier1-sp-architecture-l3vpn]] |
| 2 | iBGP loop via AD misconfiguration | Packet loop visible on interface counters; disappears when iBGP AD set to 200 | [[03-bgp-operations]] |
| 3 | L3VPN with overlapping customer address space | Two customers on 192.168.1.0/24 stay isolated via RD | [[02-sp-services-dia-l2vpn-vpls-evpn]] |
| 4 | Route reflector failure and dual-RR redundancy | All VPNs break on single-RR kill; survive when second RR added with identical cluster-id | [[01-tier1-sp-architecture-l3vpn]] |
| 5 | RSVP-TE tunnel with FRR link protection [3] | Packet-loss gap on protected-link failure measurably under 50 ms [3] | MPLS-TE chapter |
| 6 | EVPN active-active multi-homing | Both PEs forward known unicast; only DF sends BUM; sub-second mass-withdraw convergence | [[02-sp-services-dia-l2vpn-vpls-evpn]] |

Lab 1 — IS-IS convergence

Build the seven-node topology, bring IS-IS to a fully-converged state, then shut a core link (for example between P1 and P2). Watch the syslog for LSP regeneration and SPF recalculation; measure the reconvergence time. Re-run with BFD enabled on the same link. The convergence window drops from seconds to tens of milliseconds — the practical demonstration of why SPs deploy BFD.
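On IOS-XR the BFD re-run is a three-line change under the IS-IS interface (timer values are illustrative; 50 ms intervals with a multiplier of 3 give roughly 150 ms detection):

```
router isis CORE
 interface GigabitEthernet0/0/0/0
  bfd minimum-interval 50
  bfd multiplier 3
  bfd fast-detect ipv4
```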

Lab 2 — iBGP loop reproduction

Reproduce the mutual-redistribution iBGP loop. Set the iBGP administrative distance to 30, break one of the eBGP sessions, and observe packets loop on the core links via interface counters or a traceroute from a customer endpoint. Then raise iBGP AD back to 200. The loop disappears. This lab is the single fastest way to internalise why the default iBGP AD is 200 rather than equal to eBGP’s 20.
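In classic IOS-style syntax the break and the fix are one command apart (the three values are the eBGP, iBGP, and local distances; the AS number is a placeholder):

```
router bgp 65000
 address-family ipv4
  ! broken state for this lab: iBGP distance lowered to 30
  distance bgp 20 30 200
  ! the fix: restore the default iBGP distance of 200
  distance bgp 20 200 200
```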

Lab 3 — L3VPN with overlapping address space

Configure two customer VRFs, both using 192.168.1.0/24 internally. Verify that route distinguisher prepending keeps them separate in the global VPNv4 table. Then deliberately misconfigure a route target on one side and watch the service collapse. Trace through the logs to find the mistake — this is exactly the troubleshooting workflow a production engineer runs.
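A minimal two-VRF sketch in IOS-XR style (VRF names, AS number, and RD/RT values are placeholders; the distinct RDs are what keep the identical prefixes apart in the VPNv4 table):

```
vrf CUST-A
 address-family ipv4 unicast
  import route-target 65000:100
  export route-target 65000:100
!
vrf CUST-B
 address-family ipv4 unicast
  import route-target 65000:200
  export route-target 65000:200
!
router bgp 65000
 vrf CUST-A
  rd 65000:100
 vrf CUST-B
  rd 65000:200
```

The deliberate break is then a one-line change: alter one `import route-target` value and watch the routes vanish from the far-end VRF.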

Lab 4 — Route reflector failure

Deploy a single RR. Peer every PE through it. Kill the RR. Observe every VPN service break simultaneously. Add a second RR with an identical cluster-id and re-test. The lab makes viscerally clear why every serious SP deploys at least two RRs.
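The fragment that matters on each RR, sketched in IOS-XR style (addresses and AS number are placeholders; the shared cluster-id is the point of the exercise):

```
router bgp 65000
 bgp cluster-id 10.0.0.100
 neighbor 10.0.0.1
  remote-as 65000
  update-source Loopback0
  address-family vpnv4 unicast
   route-reflector-client
  ! repeat route-reflector-client under the EVPN address family as well
```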

Lab 5 — RSVP-TE tunnel with FRR

Build an RSVP-TE tunnel from PE1 to PE4 along an explicit path. Enable link protection. Initiate a continuous ping through the tunnel, then manually fail the protected link. Measure the packet-loss window. With FRR functioning, the gap is under 50 ms — the industry benchmark for carrier-grade protection [3].
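An illustrative IOS-XR head-end sketch (path name, hop address, and destination are placeholders; full link protection additionally needs a backup tunnel configured at the point of local repair):

```
explicit-path name VIA-P1
 index 10 next-address strict ipv4 unicast 10.1.12.2
!
interface tunnel-te1
 ipv4 unnumbered Loopback0
 destination 10.0.0.4
 autoroute announce
 path-option 10 explicit name VIA-P1
 fast-reroute
```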

Lab 6 — EVPN multi-homing

Connect a single CE to two PEs with a shared Ethernet Segment Identifier. Verify that both PEs forward known-unicast traffic (active-active) and that only the designated forwarder (DF) transmits BUM traffic. Disconnect one PE; measure the convergence time. Mass-withdraw semantics should deliver sub-second recovery — orders of magnitude better than VPLS BUM rebuild.
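Sketched in IOS-XR style, the multi-homing fragment is identical on both PEs (the bundle name and the operator-assigned type-0 ESI value are placeholders):

```
evpn
 interface Bundle-Ether1
  ethernet-segment
   identifier type 0 00.01.02.03.04.05.06.07.08
```

The same 9-byte ESI on both PEs is what tells EVPN they share one Ethernet segment and triggers the DF election.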

Effective lab practice

Highlight: Tip The goal of lab work is not to build working networks. It is to recognise broken ones and fix them fast. That is what SP operations does every day.

| Practice | Why it matters |
| --- | --- |
| Snapshot discipline | Both platforms save configured state [8]. Establish a known-good baseline before every experiment; restore in seconds rather than rebuilding from scratch. |
| Git-versioned configs | Copy each router’s running-config into a repository at every checkpoint. Weeks later, you can diff and restore exact working states. This is also how lab guides get published [14]. |
| Narrate aloud | Speaking each configuration line out loud — “IS-IS on loopback passive so the router-id advertises but no adjacency forms” — exposes gaps in understanding. If you cannot narrate it, look it up before moving on. |
| Deliberate breakage | Every configuration has failure modes. Shut interfaces, misconfigure RTs, reload mid-convergence, corrupt credentials. Build the reflex for fast diagnosis. |
| Incremental growth | Start with three routers and OSPF. Add MPLS. Add BGP. Add a VRF. Add a second customer. Each step teaches one concept in isolation. The full seven-node topology is the reward after many weeks, not the day-one objective. |
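The Git habit can be a single helper function. A sketch, assuming the running-configs have already been exported (by hand or via the platform's export feature) into a directory of `.cfg` files; the function name and layout are illustrative:

```shell
# Commit every exported .cfg in a directory as one checkpoint.
# Assumes configs are already exported; names/paths are illustrative.
checkpoint_configs() {
    dir="$1"; msg="$2"
    ( cd "$dir" &&
      git init -q &&
      git add -- *.cfg &&
      git -c user.email=lab@example.invalid -c user.name=lab \
          commit -q -m "$msg" )
}
```

Run it after every milestone, e.g. `checkpoint_configs ./configs "lab3: both VRFs isolated"`; a `git diff` between checkpoints then shows exactly which line broke a service.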

Alternative platforms worth knowing

EVE-NG and PNETLab are the common default, but the wider ecosystem has other tools that fit specific workflows.

| Tool | Strengths | Best for |
| --- | --- | --- |
| GNS3 | Predecessor to EVE/PNETLab; integrates with VirtualBox and VMware; smaller built-in device catalogue | Mixed environments where existing hypervisors are already deployed |
| Containerlab | Text-file-driven topology definitions; Docker-based; very lightweight. Native kinds include Nokia SR Linux [10], Cisco XRd, Juniper cRPD, Juniper cSRX, Arista cEOS, SONiC, VyOS, plus vrnetlab integration that wraps full VM-based NOSes (vMX, XRv9000, etc.) into a clab-launchable container [9]. | CI/CD pipelines, network-automation testing, reproducible lab definitions |
| Cisco Modeling Labs (CML) Free Tier | Official Cisco platform with licensed IOS XR and IOS XE images (5-node cap, telemetry forced on) [5] | Legal, licensed Cisco-centric labs without third-party image concerns |
| Netlab (Ivan Pepelnjak) | YAML-driven topology generator; defines topologies once and emits configs for the underlying provider. Supported providers are libvirt/KVM, clab (Containerlab), virtualbox, and external — there is no native EVE-NG or CML provider [11]. | Practising specific scenarios without boilerplate; rapid iteration |

Highlight: Note Containerlab and Netlab together are the most productive stack for anyone studying SP networking with open-source routing daemons [9, 11]. A full multi-PE MPLS topology can be spun up from a 50-line YAML file in under a minute and torn down just as fast.
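For a sense of scale, a complete (if tiny) Containerlab definition really is that short. The sketch below assumes the freely pullable SR Linux and FRR container images [9, 10, 13]; node names, wiring, and the FRR image tag are illustrative:

```yaml
name: sp-mini
topology:
  nodes:
    pe1:
      kind: nokia_srlinux
      image: ghcr.io/nokia/srlinux
    p1:
      kind: linux
      image: quay.io/frrouting/frr:10.1
  links:
    - endpoints: ["pe1:e1-1", "p1:eth1"]
```

`containerlab deploy -t sp-mini.clab.yml` brings the topology up; `containerlab destroy -t sp-mini.clab.yml` tears it down.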

The most common mistake is trying to build the full Tier-1 topology on day one. The progression below reflects roughly what a new SP engineer absorbs in the first 6–12 months on the job, compressed into structured self-study. Each phase builds on the previous.

```mermaid
flowchart TD
    A[Weeks 1-2<br/>Infrastructure basics<br/>3-router OSPF] --> B[Weeks 3-4<br/>BGP<br/>Two-AS eBGP, policy]
    B --> C[Weeks 5-6<br/>MPLS foundations<br/>LDP, TE tunnels]
    C --> D[Weeks 7-10<br/>VPN services<br/>L3VPN, VPWS, EVPN]
    D --> E[Weeks 11-12<br/>Failure modes<br/>BFD, FRR, convergence]
    E --> F[Weeks 13+<br/>Specialty topics<br/>MVPN, QoS, SRv6, inter-AS]
```

| Phase | Focus | Concrete deliverable |
| --- | --- | --- |
| Weeks 1–2 | Infrastructure and IGP | EVE-NG/PNETLab running; 3-router OSPF topology; neighbours up; loopback pings succeed |
| Weeks 3–4 | BGP | Two-AS eBGP setup; route filtering, AS-path prepending, local-pref manipulation; connected-check and disable-connected-check reproduced |
| Weeks 5–6 | MPLS underlay | LDP enabled across core; LSPs verified between loopbacks; at least one RSVP-TE tunnel with an explicit path |
| Weeks 7–10 | VPN services | VRFs defined; L3VPN for one customer, then two; VPWS; EVPN. Budget a week per service to really understand it |
| Weeks 11–12 | Failure modes | Systematic breakage; convergence measurement; BFD and FRR [3] deployed; understand what each protects |
| Weeks 13+ | Specialty topics | Multicast/MVPN, QoS classification and policing, BGP communities and policy, inter-AS VPN Options A/B/C, SRv6 if ambitious |

By the end of the sequence the mental layering becomes automatic: physical hardware → IGP → MPLS → iBGP → VPN services → customer integration. That layered model is the foundation for every SP from a small regional ISP to a Tier-1 carrier.

References

  1. RFC 6241, Network Configuration Protocol (NETCONF). R. Enns et al., IETF, June 2011. https://www.rfc-editor.org/rfc/rfc6241
  2. RFC 7950, The YANG 1.1 Data Modeling Language. M. Bjorklund (Ed.), IETF, August 2016. https://www.rfc-editor.org/rfc/rfc7950
  3. RFC 4090, Fast Reroute Extensions to RSVP-TE for LSP Tunnels. P. Pan, G. Swallow, A. Atlas (Eds.), IETF, May 2005. https://www.rfc-editor.org/rfc/rfc4090
  4. OpenConfig gNMI specification. https://github.com/openconfig/reference/blob/master/rpc/gnmi/gnmi-specification.md
  5. Cisco — Cisco Modeling Labs (CML-Free Tier). https://developer.cisco.com/docs/modeling-labs/cml-free/
  6. Cisco — CML Resource Limits. https://developer.cisco.com/docs/modeling-labs/resource-limits/
  7. Cisco — IOS-XRv 9000 Router Datasheet (C78-734034). https://www.cisco.com/c/en/us/products/collateral/routers/ios-xrv-9000-router/datasheet-c78-734034.html
  8. EVE-NG — Community vs Professional Feature Comparison. https://www.eve-ng.net/index.php/features-compare/
  9. Containerlab — Supported Kinds. https://containerlab.dev/manual/kinds/
  10. Containerlab — Nokia SR Linux Kind. https://containerlab.dev/manual/kinds/srl/
  11. Netlab — Providers (libvirt, clab, virtualbox, external). https://netlab.tools/providers/
  12. Juniper — vJunos Labs. https://www.juniper.net/us/en/dm/vjunos-labs.html
  13. Nokia — SR Linux Container Image (GitHub). https://github.com/nokia/srlinux-container-image
  14. J. Edelman, S. S. Lowe, M. Oswalt, Network Programmability and Automation, 2nd ed., O’Reilly, 2023.