What is Unix epoch time?
Unix epoch time (also “Unix time” or just “epoch time”) is how most computer systems count time. It’s a single integer: the number of seconds since 1970-01-01 00:00:00 UTC.
The integer is timezone-independent and easy to manipulate. Subtracting two timestamps gives you the duration between them in seconds. Comparing them tells you which came first. Storing them is just storing an integer.
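For instance, a two-line sketch of that arithmetic in JavaScript (the values are arbitrary illustrative timestamps):

```js
const start = 1735689600; // 2025-01-01T00:00:00Z, in Unix seconds
const end   = 1735776000; // 2025-01-02T00:00:00Z

end - start; // 86400: the duration in seconds (exactly one day)
start < end; // true: plain integer comparison gives you ordering
```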
A complete example
- Date: 2025-01-01 00:00:00 UTC
- Unix seconds: 1735689600
- Unix milliseconds: 1735689600000
- ISO 8601: 2025-01-01T00:00:00Z
- Day of week: Wednesday
Try it: paste any of these into the converter and watch them roundtrip.
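Or roundtrip them in code; a quick JavaScript check of the same values (assuming any modern engine):

```js
const d = new Date(1735689600 * 1000); // seconds → milliseconds

d.toISOString(); // "2025-01-01T00:00:00.000Z"
d.getTime();     // 1735689600000 (Unix milliseconds)
d.getUTCDay();   // 3, i.e. Wednesday (Sunday is 0)
```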
Why this design
Three properties make Unix time the lingua franca of computer time:
- It’s a single integer. Easy to store, compare, subtract, serialize. Languages don’t disagree about its representation.
- It’s timezone-independent. A given Unix timestamp refers to the same instant in time everywhere on Earth. Local time interpretation happens at display only (the sketch after this list makes this concrete).
- The epoch is fixed and arbitrary. 1970-01-01 was chosen because Unix was being designed in 1969 and the team needed a reference point. The exact date doesn’t matter; what matters is that everyone agrees on it.
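To make the second property concrete, here's a sketch in JavaScript: one timestamp, three renderings. The zone names are standard IANA identifiers; exact output formatting varies by engine and locale data:

```js
const ts = 1735689600 * 1000; // one instant: 2025-01-01T00:00:00Z

// The integer is the same everywhere; only the formatting differs.
new Date(ts).toLocaleString("en-US", { timeZone: "America/New_York" }); // "12/31/2024, 7:00:00 PM"
new Date(ts).toLocaleString("en-GB", { timeZone: "Europe/London" });    // "01/01/2025, 00:00:00"
new Date(ts).toLocaleString("ja-JP", { timeZone: "Asia/Tokyo" });       // "2025/1/1 9:00:00"
```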
What it doesn’t track
Unix time is just a count of seconds. It doesn’t know about:
- Calendar concepts — months, weeks, days. You compute these on top of the integer.
- Timezones — Unix time is UTC by definition. Local time is a display concern.
- Daylight saving time — same as above. The Unix integer doesn’t jump when DST starts; only the local display does (see the sketch after this list).
- Leap seconds — Unix time deliberately ignores them. When a leap second is inserted (which has happened 27 times since 1972), Unix time pretends nothing happened. Some systems freeze the second; others smear it across a longer period.
- Sub-second precision — Unix seconds are integers. For finer precision, systems use milliseconds, microseconds, or nanoseconds.
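A minimal sketch of the DST point, assuming the 2025 US spring-forward instant (2025-03-09 07:00 UTC) and the IANA zone America/New_York:

```js
const t = 1741503600; // 2025-03-09T07:00:00Z, the moment US Eastern springs forward

const fmt = (s) =>
  new Date(s * 1000).toLocaleString("en-US", { timeZone: "America/New_York" });

fmt(t - 60); // "3/9/2025, 1:59:00 AM"
fmt(t);      // "3/9/2025, 3:00:00 AM" (the wall clock skips 2:00 to 2:59 entirely)

// The Unix integers differ by exactly 60 seconds; only the display jumped.
```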
Common variations
Different languages and APIs use different time units:
| Unit | Range covered (signed 32-bit) | Range covered (signed 64-bit) | Where you’ll see it |
|---|---|---|---|
| Seconds | 1901–2038 | ±292 billion years | C time_t, Python time.time(), Go Unix(), PostgreSQL EXTRACT(EPOCH ...), most APIs |
| Milliseconds | ±~25 days (Dec 1969 – Jan 1970) | ±292 million years | JavaScript Date.now(), Java System.currentTimeMillis() |
| Microseconds | ±~36 minutes around the epoch | ±~292,000 years | Postgres clock_timestamp(), time-series databases |
| Nanoseconds | ±~2 seconds around the epoch | 1678–2262 (±~292 years) | Go time.UnixNano(), modern Linux file timestamps |
The 32-bit signed range for seconds is what causes the Year 2038 problem: at 03:14:07 UTC on January 19, 2038, a signed 32-bit seconds counter overflows.
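A short JavaScript sketch of the units in practice; note that process.hrtime.bigint() is Node.js-specific and monotonic rather than epoch-based:

```js
Date.now();                    // Unix milliseconds, e.g. 1735689600000
Math.floor(Date.now() / 1000); // Unix seconds, what most non-JS APIs expect
process.hrtime.bigint();       // nanoseconds (Node.js; monotonic, not since the epoch)

// The 2038 boundary: the largest value a signed 32-bit seconds counter can hold.
new Date((2 ** 31 - 1) * 1000).toISOString(); // "2038-01-19T03:14:07.000Z"
```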
The seconds-vs-milliseconds bug
The single most common bug in time code: passing a number meant for one unit to a function expecting another. The 1000× error doesn't fail loudly; a 2025 timestamp in seconds, read as milliseconds, silently lands in January 1970, more than 50 years off.
```js
// Bug: passing seconds to JavaScript's millisecond-expecting Date
new Date(1735689600);
// → 1970-01-21T02:08:09.600Z ← wrong; the seconds were treated as ms, giving 1970

// Fix: scale to ms
new Date(1735689600 * 1000);
// → 2025-01-01T00:00:00.000Z ← right
```
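One defensive pattern is a magnitude check at the boundary. This toMillis helper is a heuristic sketch, not a library API; the 1e11 cutoff treats anything below it as seconds and anything above as milliseconds:

```js
// Hypothetical helper: normalize an incoming timestamp to milliseconds.
// Heuristic: values below 1e11 are read as seconds (1e11 s is ~year 5138);
// values at or above 1e11 are read as ms (1e11 ms is ~March 1973).
function toMillis(ts) {
  return ts < 1e11 ? ts * 1000 : ts;
}

new Date(toMillis(1735689600));    // 2025-01-01T00:00:00Z (seconds in)
new Date(toMillis(1735689600000)); // same instant (milliseconds in)
```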
Where it comes up
- Logs. Most modern log aggregators store Unix timestamps. ISO strings are friendlier in human-readable logs but heavier on disk.
- Database TIMESTAMP columns. PostgreSQL `timestamp with time zone` stores UTC under the hood, regardless of how it’s displayed.
- Cookies and auth tokens. JWT `exp`, `iat`, and `nbf` claims are Unix seconds (see the sketch after this list).
- Cache headers. HTTP `Expires` carries an absolute expiry time, formatted as an HTTP-date string (RFC 7231) rather than as a raw integer.
- File system timestamps. Linux’s `stat` shows seconds (legacy) or nanoseconds (modern).
- Distributed systems. When you need a global ordering of events, Unix timestamps are the cheapest first attempt (with caveats — see hybrid logical clocks for the right answer at scale).
Common pitfalls
1. Forgetting timezone
new Date("2025-01-01");
// In some implementations: 2025-01-01T00:00:00Z (UTC)
// In others: midnight in your local timezone
Always be explicit:
new Date("2025-01-01T00:00:00Z"); // unambiguously UTC
new Date("2025-01-01T00:00:00-08:00"); // explicitly Pacific
2. Not normalizing on storage
If you store local times, daylight saving will eventually bite you. Always store UTC; convert to local at display time only.
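A minimal sketch of that discipline; the viewer's timezone is hard-coded here for illustration:

```js
// Store: the UTC integer (or an ISO string ending in Z), never a bare local time.
const storedAt = Date.now(); // e.g. 1735689600000

// Display: convert to the viewer's timezone at the last possible moment.
new Date(storedAt).toLocaleString("en-US", { timeZone: "America/Los_Angeles" });
```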
3. Comparing Date objects with ==
```js
new Date(1735689600000) == new Date(1735689600000); // false (object identity)
new Date(1735689600000).getTime() === new Date(1735689600000).getTime(); // true
```
4. Reading “epoch” as anything but UTC
The Unix epoch is always 1970-01-01 00:00:00 UTC. Some systems count from different epochs (Excel uses 1900; classic Mac OS used 1904; Windows FILETIME uses 1601), but those aren’t “Unix time” — they’re proprietary formats with similar shapes.
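To see what bridging those epochs looks like, here's a hedged sketch converting a Windows FILETIME (100-nanosecond ticks since 1601-01-01 UTC) to Unix milliseconds; the constant 11,644,473,600 is the number of seconds separating the two epochs:

```js
// FILETIME counts 100-ns ticks since 1601-01-01T00:00:00Z.
// 11_644_473_600 is the number of seconds between 1601-01-01 and 1970-01-01.
const EPOCH_DIFF_SECONDS = 11_644_473_600n;

function fileTimeToUnixMillis(fileTime /* BigInt */) {
  const unixTicks = fileTime - EPOCH_DIFF_SECONDS * 10_000_000n; // shift epochs
  return Number(unixTicks / 10_000n); // 100-ns ticks → milliseconds
}

// 2025-01-01T00:00:00Z expressed as a FILETIME:
fileTimeToUnixMillis(133_801_632_000_000_000n); // → 1735689600000
```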
Try the tools
- Live now-clock — current Unix timestamp ticking every second
- Converter — paste any timestamp or date, see all formats
- Seconds vs milliseconds — when each unit is right
- Year 2038 problem — the next big timestamp rollover