
Five timestamp formats that have caused real production outages

7 min read #epoch #unix #timestamps #incidents

I’ve been collecting timestamp-related production incidents for about five years. The pattern is depressingly consistent: a developer assumes one format, the system delivers another, and a payment, a session, or a metric quietly goes wrong by a factor of 1000 — or 30 years.

These are the five most common varieties.

1. Seconds vs milliseconds (the 1000× incident)

A team I worked with had a payment-window-validation service. The contract said: “the signed timestamp on the request must be within 60 seconds of server time.” The check was implemented as:

if (Math.abs(serverTime - signedTime) > 60) reject();

The signed timestamp came from a JS frontend in milliseconds. serverTime was Go’s time.Now().Unix(), which returns seconds. So the difference was always around 1.7 trillion. The check rejected every request.

The fix shipped in 4 minutes — but only after the on-call rolled back the deploy, because the symptom looked like a database problem.

Spotting the bug: Unix seconds are 10 digits today (~1.7 billion). Unix milliseconds are 13 digits (~1.7 trillion). If you ever see a timestamp comparison where one side is ~1000× the other, you’ve found it.
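That digit-count rule can be made mechanical. A minimal sketch (detect_unit is a name invented here; the thresholds hold roughly from September 2001 through 2286):

```python
def detect_unit(ts: int) -> str:
    """Guess the resolution of a Unix timestamp from its magnitude.

    Heuristic only: for dates between ~2001 and ~2286, seconds are
    10 digits, milliseconds 13, microseconds 16, nanoseconds 19.
    """
    return {10: "s", 13: "ms", 16: "us", 19: "ns"}.get(len(str(abs(ts))), "unknown")

print(detect_unit(1_700_000_000))      # → s   (Nov 2023, seconds)
print(detect_unit(1_700_000_000_000))  # → ms  (same instant, milliseconds)
```

Running this at the API boundary and logging anything “unknown” catches the 1000× mismatch before it rejects live traffic.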

2. The Y2038 problem on signed 32-bit integers

January 19, 2038 at 03:14:07 UTC, signed 32-bit Unix time wraps to negative. Anything storing time as a signed int32 — including a lot of embedded firmware, older filesystems, and the occasional surprise in a cloud service’s metadata field — will start treating “now” as December 13, 1901.

You’d think this is a 2038 problem. It isn’t. It’s already happening for any “expires in 30 years” computation. Issue an SSL cert today with a 2055 expiry, store the expiry as a 32-bit Unix timestamp, and your cert is already broken.

A real incident from 2024: a fleet management system stored vehicle inspection deadlines as 32-bit Unix time. Inspections scheduled for “30 years from now” came up immediately overdue.

Spotting the bug: any time field with a 2038 or 1901 boundary in suspicious places. Audit INT(11) columns in MySQL — the (11) is only a display width; the storage is still 32-bit.
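You can reproduce the failure mode without a database. A sketch that simulates a signed 32-bit column with ctypes (store_as_int32 is a hypothetical helper, not a real API):

```python
import ctypes
from datetime import datetime, timezone

def store_as_int32(ts: int) -> int:
    # Simulate writing a Unix timestamp into a signed 32-bit column:
    # values past 2038-01-19 03:14:07 UTC wrap around to negative.
    return ctypes.c_int32(ts).value

expiry = int(datetime(2055, 1, 1, tzinfo=timezone.utc).timestamp())
print(expiry)                  # 2682374400 — needs 64 bits
print(store_as_int32(expiry))  # -1612592896 — decades before 1970, "overdue"
```

A 2055 expiry stored this way reads back as a date in the 1910s, which is exactly the “immediately overdue” symptom from the fleet-management incident.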

3. The 1582 UUID v1 epoch

UUID v1 timestamps count 100-nanosecond intervals since October 15, 1582. That’s the date the Gregorian calendar was adopted in Catholic countries (the rest of Europe followed over the next 350 years). Pope Gregory XIII shipped it.

If you ever decode a UUID v1 and see “1582-10-15,” you haven’t gone back in time — you’ve found a v1 with all-zero timestamp bits, which usually means the generator was misconfigured (fixed clock, no entropy source).

UUID v1 timestamp epoch: 1582-10-15T00:00:00Z
UUID v7 timestamp epoch: 1970-01-01T00:00:00Z (unix epoch, milliseconds)

Spotting the bug: see the decoder — paste a UUID v1 and check the extracted time. If it’s pre-1970 or post-3000, the timestamp bits are junk.
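If you want the check in code rather than a web tool: Python’s uuid module exposes the raw 60-bit count as UUID.time, so a minimal decoder (uuid1_timestamp is a name invented here) is just an offset from the Gregorian epoch:

```python
import uuid
from datetime import datetime, timedelta, timezone

GREGORIAN_EPOCH = datetime(1582, 10, 15, tzinfo=timezone.utc)

def uuid1_timestamp(u: uuid.UUID) -> datetime:
    # UUID.time is the count of 100-ns intervals since 1582-10-15;
    # // 10 converts to microseconds (sub-microsecond bits are dropped).
    return GREGORIAN_EPOCH + timedelta(microseconds=u.time // 10)

u = uuid.uuid1()
print(uuid1_timestamp(u))  # should be approximately the current UTC time
```

A freshly generated v1 should decode to roughly “now”; anything that decodes to 1582 means the timestamp bits are all zero.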

4. ISO 8601 without a timezone

2026-04-28T12:00:00

Is this UTC? Local time? “Naive”? ISO 8601 says a timestamp with no offset is local time. JavaScript’s new Date("2026-04-28T12:00:00") does one thing in Chrome and another in Safari (or used to, anyway). Python’s datetime.fromisoformat returns a naive datetime that raises a TypeError the moment it’s compared against a timezone-aware one.

Spotting the bug: ISO timestamps without a Z or ±HH:MM suffix. Treat them as suspect. The fix is to always emit timezone-suffixed timestamps (2026-04-28T12:00:00Z or 2026-04-28T12:00:00-07:00) and reject incoming ones that lack it.

This is a common backend bug: a microservice deserialises an ISO timestamp into a naive Python datetime, then passes it to a client that interprets it as local. A 7-hour offset on a payment-due date is enough to charge the wrong day.

5. Daylight Saving Time gaps and overlaps

In US timezones, the 1:00–2:00 AM hour happens twice on the November fall-back day, and the 2:00–3:00 AM hour zero times on the March spring-forward day. Code that loops over hours and assumes “every hour is 60 minutes” breaks twice a year.

# Buggy: steps through local wall-clock hours. On the November
# fall-back day the repeated 1 AM hour is visited only once, so one
# real hour of data is silently skipped; on the March spring-forward
# day the loop "processes" a 2 AM hour that never existed on the clock.
t = start_of_day_in_ny()
while t < end_of_day:
    process(t)
    t = t + timedelta(hours=1)

The fix is to loop in UTC and convert at the boundary:

from datetime import timedelta, timezone
from zoneinfo import ZoneInfo

NY = ZoneInfo("America/New_York")

t_utc = start_of_day.astimezone(timezone.utc)
end_utc = end_of_day.astimezone(timezone.utc)
while t_utc < end_utc:
    process(t_utc.astimezone(NY))
    t_utc = t_utc + timedelta(hours=1)

Spotting the bug: any loop that increments local-time datetimes. Do the loop in UTC.
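The November overlap is directly observable with zoneinfo: the same wall-clock reading maps to two different instants, distinguished by fold. A sketch (dates assume the 2025 US schedule, where fall-back is November 2):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

NY = ZoneInfo("America/New_York")

# 1:30 AM occurs twice on 2025-11-02: once in EDT, once in EST.
first = datetime(2025, 11, 2, 1, 30, tzinfo=NY)           # fold=0 → EDT (-04:00)
second = datetime(2025, 11, 2, 1, 30, fold=1, tzinfo=NY)  # fold=1 → EST (-05:00)
print(second.timestamp() - first.timestamp())  # 3600.0 — one real hour apart
```

Two datetimes that print identically, one real hour apart — which is exactly why wall-clock loops can’t be trusted near the transition.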

Quick rules

  1. Always store and compute in UTC. Convert only at display.
  2. Pick one resolution and stick with it. Microservice contracts should specify “Unix milliseconds” or “Unix seconds” in writing.
  3. Use 64-bit integers for any timestamp that might be more than 30 years out. The cost is four extra bytes per row (BIGINT instead of INT).
  4. Reject ambiguous ISO at the API boundary.
  5. Don’t loop over local hours.

Plug any timestamp into the converter — it auto-detects format (seconds vs ms vs µs vs ns vs ISO vs RFC) and shows you what the bytes mean before you ship the bug.