“Don’t worry, we have backups.”
We hear that line a lot.
Unfortunately, we also hear versions of this one:
“We thought we had backups… until we needed them.”
Ransomware has changed the way we need to think about backups. Attackers increasingly try to encrypt or delete backups first, then go after production systems. If they succeed, you’re left with bad options: pay, rebuild from scratch, or face extended downtime.
The good news: you don’t need a dozen new tools to dramatically improve your position. You need backups that are designed to survive a bad week, not just check a box.
Here’s what that looks like in practice for Milwaukee-area organizations.
“We have backups” vs. “We can actually recover”
On paper, both statements sound similar. In reality, they’re worlds apart.
Typical backup setup:
- A backup job runs every night
- Logs say “Success”
- Nobody touches it unless there’s a problem
Backups that actually save you when ransomware hits:
- Are protected from tampering
- Live in more than one place
- Are tested regularly, not just assumed
Let’s break that down.
1. Immutability: Backups that can’t be encrypted or deleted
If an attacker gets access to your backup console or storage and can:
- Change retention
- Delete backups
- Encrypt backup storage
…then your “safety net” disappears right when you need it most.
That’s where immutable backups come in. In plain English, immutability means:
“This backup cannot be changed or deleted for a set period of time, even by an admin.”
Different platforms call it different things (object lock, WORM, retention lock), but the goal is the same:
- Protect at least one recent copy of your critical data
- Prevent attackers (or well-meaning admins) from wiping it out during an incident
For many small and mid-sized businesses, this is one of the highest impact changes you can make. It doesn’t fix everything, but it dramatically improves your odds of recovering without paying a ransom.
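As a mental model (not any vendor’s real API), object-lock and retention-lock behavior looks roughly like this toy Python sketch: once a backup is written with a retention period, even an admin-level delete is refused until the clock runs out.

```python
from datetime import datetime, timedelta, timezone

class ImmutableBackupStore:
    """Toy model of object-lock / WORM / retention-lock semantics:
    once written, a backup cannot be changed or deleted until its
    retain-until timestamp has passed, even by an admin."""

    def __init__(self):
        self._objects = {}  # name -> (data, retain_until)

    def write(self, name, data, retention_days):
        if name in self._objects:
            raise PermissionError(f"{name} is write-once (WORM)")
        retain_until = datetime.now(timezone.utc) + timedelta(days=retention_days)
        self._objects[name] = (data, retain_until)

    def delete(self, name):
        _, retain_until = self._objects[name]
        if datetime.now(timezone.utc) < retain_until:
            raise PermissionError(
                f"{name} is retention-locked until {retain_until:%Y-%m-%d}"
            )
        del self._objects[name]

store = ImmutableBackupStore()
store.write("payroll-2024-06-01.bak", b"...", retention_days=30)

try:
    store.delete("payroll-2024-06-01.bak")  # what an attacker (or panicked admin) would try
except PermissionError as e:
    print(f"Blocked: {e}")
```

Real platforms implement this in storage (for example, object lock on cloud buckets), so the protection holds even if the backup console itself is compromised.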
2. Separation: Not all eggs in one (network) basket
We still see environments where:
- The backup server lives on the same network as everything else
- Backups write to a storage device that’s always online and always accessible
- There’s no meaningful separation between production and protection
In a ransomware event, that’s like having your fireproof safe made out of cardboard.
A stronger approach usually includes:
- At least one copy off the primary network
  - Cloud backups with restricted access
  - Replication to a different environment or location
- Limited access to backup management consoles
  - Strong identity and MFA for admin accounts
  - No shared logins or “set it and forget it” credentials
- Network segmentation
  - Backup infrastructure is not treated the same as everyday user machines
The goal is simple: even if production systems are compromised, at least one set of backups remains untouched.
3. Tested restores: Screenshots, not assumptions
The most painful calls we take usually involve some version of:
“The backup jobs all said ‘successful,’ but the restore failed.”
Reasons vary:
- The wrong folders or databases were selected
- Certain systems were excluded to save space
- Backups were corrupted and nobody noticed
- The right credentials or encryption keys weren’t recorded
The fix is not glamorous, but it’s effective:
- Run test restores on a schedule
  - Pick a few representative systems like file shares, a key application, or maybe a VM
- Restore them to a safe environment
  - Confirm they work as expected
- Document what you did
  - Keep simple evidence (screenshots, short notes) that you can show leadership or insurers
- Adjust based on what you learn
  - If a restore is slow, missing, or painful, change your backup strategy before it’s an emergency
We like to say:
“If it hasn’t been restored, it isn’t really backed up.”
4. Priorities: Not everything needs to come back at once
In the middle of a ransomware event, everything feels urgent. But not everything is equally important in the first 24 hours.
A good backup and recovery plan answers:
- Which systems must be restored first to keep the business moving?
  - Email? Line-of-business app? File shares? ERP?
- Which teams are most impacted by downtime?
  - Can some departments operate on paper or manual workflows for a few days while you focus on others?
- What’s our acceptable recovery point?
  - Are we okay losing a day of data on some systems but only an hour on others?
Once those decisions are made, you can:
- Allocate backup resources where they matter most
- Choose backup frequencies that match business impact
- Communicate realistic expectations during a crisis
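Those decisions can live in something as simple as a small table. The Python sketch below is hypothetical: the system names, tiers, and RPO numbers are made up for illustration, but the idea is real: restore by tier, and back up at least as often as each system’s RPO demands.

```python
# Each system gets a recovery tier (restore order) and an RPO target
# (maximum acceptable data loss). All values here are illustrative.
SYSTEMS = {
    "email":          {"tier": 1, "rpo_hours": 4},
    "erp":            {"tier": 1, "rpo_hours": 1},
    "file_shares":    {"tier": 2, "rpo_hours": 24},
    "marketing_site": {"tier": 3, "rpo_hours": 24},
}

def restore_order(systems):
    """Order systems by tier, then by strictest RPO within a tier."""
    return sorted(systems, key=lambda s: (systems[s]["tier"], systems[s]["rpo_hours"]))

def required_backup_frequency(systems):
    """A system's backup interval must be at least as frequent as its RPO."""
    return {name: f"every {cfg['rpo_hours']}h or better" for name, cfg in systems.items()}

print(restore_order(SYSTEMS))
# ERP comes back before email (tighter RPO), then file shares, then the website
```

Writing this down ahead of time turns a crisis argument into a checklist.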
This is where IT and leadership need to be aligned ahead of time, not arguing about priorities in the middle of an incident.
Questions to ask your IT team or MSP this month
You don’t have to become a backup expert overnight. Start with questions like:
- “Do we have any immutable or retention-locked backups today?”
- “Where do our most critical backups live, and who can access them?”
- “When was our last test restore, and what did we learn from it?”
- “If we were hit with ransomware tomorrow, which systems would we bring back first?”
If the answers are fuzzy or rely heavily on “we think,” that’s your cue to push a little deeper.
Where Stamm Tech fits
For many Milwaukee businesses, backups have grown organically over the years:
- A solution added for one system
- Another tool added after a hardware refresh
- A cloud backup tacked on for good measure
We help untangle that picture and build something more intentional:
- A clear map of what’s being protected and how
- Immutability and separation where it matters most
- A simple, repeatable restore-testing process
- Documentation leadership and insurers can understand
You don’t need perfection on day one. You need a backup story that moves you from “we hope it’s fine” to “we know what will happen if we ever need it.”
If you’d like a 30-minute backup sanity check, we’re happy to walk through what you have today and suggest a few practical next steps. No scare tactics, no surprise homework.
FAQ
Q1: How often should we back up our data?
A: It depends on how much data you can afford to lose. Many SMBs land on daily backups for most systems and more frequent snapshots (hourly or better) for critical apps and databases. The right answer is tied to your “recovery point objective” (RPO): how far back in time you’re comfortable rolling if you have to restore.
Q2: Do we really need immutable backups if we already use a reputable backup vendor?
A: If you’re worried about ransomware or malicious deletion, yes. Immutability adds a safety layer on top of your existing backup software. Think of it as a lock on the lock. If someone gets into your backup console, they still can’t erase or encrypt certain protected copies.
Q3: How often should we test our restores?
A: At least a couple of times a year for most organizations, and more often for truly critical systems. A good rhythm is quarterly spot-checks: pick a few systems, perform real restores to a safe environment, and confirm they work. The goal is to discover problems on a quiet Tuesday, not during a ransomware Friday.
Q4: What’s the difference between “on-site,” “off-site,” and “cloud” backups?
A:
- On-site: Backups stored in your building or local network; fast restores but vulnerable to local disasters or ransomware.
- Off-site: Backups stored in a separate location (another office, colocation, etc.).
- Cloud: Backups stored in a cloud platform or provider’s data center.
A strong strategy usually combines at least two of these so you’re not relying on a single location or method.
Q5: How do we decide which systems to restore first after an incident?
A: Start with the processes that keep the business moving: communication (email, key collaboration tools), revenue-critical apps (ERP, line-of-business systems), and data needed for day-to-day operations. That prioritization should be agreed on before an incident so IT isn’t guessing under pressure.
Q6: Can Stamm Tech work with our existing backup tools, or do we have to replace everything?
A: In many cases, we can work with what you already have: tuning retention, adding immutability, tightening access, and building a restore-testing process. Sometimes we’ll recommend changes or replacements, but we always start with, “Can we make the current setup safer and more usable first?”