
What is Unix epoch time?

On this page
  1. A complete example
  2. Why this design
  3. What it doesn’t track
  4. Common variations
  5. The seconds-vs-milliseconds bug
  6. Where it comes up
  7. Common pitfalls
    1. Forgetting timezone
    2. Not normalizing on storage
    3. Comparing Date objects with ==
    4. Reading “epoch” as anything but UTC
  8. Try the tools

Unix epoch time (also “Unix time” or just “epoch time”) is how most computer systems count time. It’s a single integer: the number of seconds since 1970-01-01 00:00:00 UTC.

The integer is timezone-independent and easy to manipulate. Subtracting two timestamps gives you the duration between them in seconds. Comparing them tells you which came first. Storing them is just storing an integer.
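A minimal JavaScript sketch of those three operations (the variable names are just for illustration):

// Date.now() returns milliseconds since the epoch, so divide by 1000
// and truncate to get Unix seconds.
const nowSeconds = Math.floor(Date.now() / 1000);

// Subtracting two timestamps gives a duration in seconds.
const launch = 1735689600;              // 2025-01-01 00:00:00 UTC
const elapsed = nowSeconds - launch;    // seconds since that instant

// Comparing tells you which instant came first.
const launchIsInThePast = launch < nowSeconds;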

A complete example

Date:                2025-01-01 00:00:00 UTC
Unix seconds:        1735689600
Unix milliseconds:   1735689600000
ISO 8601:            2025-01-01T00:00:00Z
Day of week:         Wednesday

Try it: paste any of these into the converter and watch them roundtrip.
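You can also reproduce the values above with nothing but the standard Date methods; this sketch is independent of the converter:

const ts = 1735689600;                  // Unix seconds
const d = new Date(ts * 1000);          // the Date constructor takes milliseconds

d.toISOString();    // "2025-01-01T00:00:00.000Z"   (ISO 8601)
d.getTime();        // 1735689600000                (Unix milliseconds)
d.getUTCDay();      // 3 → Wednesday (0 = Sunday)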

Why this design

Three properties make Unix time the lingua franca of computer time:

  1. It’s a single integer. Easy to store, compare, subtract, serialize. Languages don’t disagree about its representation.
  2. It’s timezone-independent. A given Unix timestamp refers to the same instant in time everywhere on Earth. Local-time interpretation happens only at display, as shown in the sketch after this list.
  3. The epoch is fixed and arbitrary. 1970-01-01 was chosen because Unix was being designed in 1969 and the team needed a reference point. The exact date doesn’t matter; what matters is that everyone agrees on it.
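To make the second property concrete, here is one instant rendered in three zones using the standard toLocaleString with an explicit timeZone option (the exact output strings vary slightly by environment):

const instant = new Date(1735689600 * 1000);   // one instant, stored as one integer

// Same timestamp, different display-time interpretations:
instant.toLocaleString("en-US", { timeZone: "UTC" });              // "1/1/2025, 12:00:00 AM"
instant.toLocaleString("en-US", { timeZone: "America/New_York" }); // "12/31/2024, 7:00:00 PM"
instant.toLocaleString("en-US", { timeZone: "Asia/Tokyo" });       // "1/1/2025, 9:00:00 AM"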

What it doesn’t track

Unix time is just a count of seconds. It carries no timezone, no calendar or daylight-saving rules, and no leap seconds (POSIX treats every day as exactly 86,400 seconds); all of that is layered on top when the timestamp is interpreted or displayed.

Common variations

Different languages and APIs use different time units:

| Unit | Range (signed 32-bit) | Range (signed 64-bit) | Where you’ll see it |
| --- | --- | --- | --- |
| Seconds | 1901–2038 | ±292 billion years | C time_t, Python time.time(), Go Unix(), PostgreSQL EXTRACT(EPOCH ...), most APIs |
| Milliseconds | ≈ ±25 days around 1970 | ±292 million years | JavaScript Date.now(), Java System.currentTimeMillis() |
| Microseconds | ≈ ±36 minutes | ±292,000 years | Postgres clock_timestamp(), time-series databases |
| Nanoseconds | ≈ ±2 seconds | ±292 years | Go time.UnixNano(), modern Linux file timestamps |

The 32-bit signed range for seconds is what causes the Year 2038 problem — January 19, 2038 is when 32-bit seconds overflow.
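You can see the boundary with ordinary JavaScript; the only assumption is that the counter is a signed 32-bit number of seconds:

// The largest value a signed 32-bit counter of Unix seconds can hold:
const max32 = 2 ** 31 - 1;                 // 2147483647

new Date(max32 * 1000).toISOString();      // "2038-01-19T03:14:07.000Z"
// One second later, a 32-bit time_t wraps to a negative value
// and the date jumps back to 1901.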

The seconds-vs-milliseconds bug

The single most common bug in time code: passing a number meant for one unit to a function expecting another. A 1000× error gives you a date 30+ years off.

// Bug: passing seconds to JavaScript's millisecond-expecting Date
new Date(1735689600);
// → 1970-01-21T02:08:09.600Z  ← wrong; the seconds value was treated as ms, giving 1970

// Fix: scale to ms
new Date(1735689600 * 1000);
// → 2025-01-01T00:00:00.000Z  ← right
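If you ingest timestamps whose unit you don’t control, a common defensive trick is a magnitude check. The helper below is hypothetical (not from any particular library) and assumes inputs represent dates between roughly 1973 and the year 5000, so second-precision values have about 10 digits and millisecond-precision values have about 13:

// Hypothetical helper: guess the unit from the magnitude and normalize to ms.
function toMilliseconds(ts) {
  // Values below 1e11 can't be millisecond timestamps for any date after ~1973,
  // so treat them as seconds; anything larger is assumed to already be ms.
  return ts < 1e11 ? ts * 1000 : ts;
}

toMilliseconds(1735689600);     // 1735689600000 (was seconds)
toMilliseconds(1735689600000);  // 1735689600000 (already ms)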

Dedicated guide on this →

Where it comes up

Common pitfalls

1. Forgetting timezone

new Date("2025-01-01");
// In some implementations: 2025-01-01T00:00:00Z (UTC)
// In others: midnight in your local timezone

Always be explicit:

new Date("2025-01-01T00:00:00Z");        // unambiguously UTC
new Date("2025-01-01T00:00:00-08:00");   // explicitly Pacific

2. Not normalizing on storage

If you store local times, daylight saving will eventually bite you. Always store UTC; convert to local at display time only.
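A minimal sketch of that discipline in JavaScript (the record shape, locales, and zone names are placeholders for illustration):

// Store: a Unix timestamp is UTC by definition, so it is safe to persist as-is.
const record = { createdAt: Math.floor(Date.now() / 1000) };

// Display: convert to the viewer's zone only at render time.
const created = new Date(record.createdAt * 1000);
created.toLocaleString("en-GB", { timeZone: "Europe/London" });
created.toLocaleString("ja-JP", { timeZone: "Asia/Tokyo" });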

3. Comparing Date objects with ==

new Date(1735689600000) == new Date(1735689600000)   // false (object identity)
new Date(1735689600000).getTime() === new Date(1735689600000).getTime()   // true
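Relational operators and subtraction do work, because they coerce a Date to its millisecond value; only == and === fall back to object identity:

new Date(1735689600000) < new Date(1735689700000)    // true (numeric comparison)
new Date(1735689700000) - new Date(1735689600000)    // 100000 (difference in ms)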

4. Reading “epoch” as anything but UTC

The Unix epoch is always 1970-01-01 00:00:00 UTC. Other systems count from different epochs (Excel’s serial dates from 1900, classic Mac OS from 1904, Windows FILETIME from 1601), but those aren’t “Unix time”; they’re different formats that merely have a similar shape.
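As an illustration of how such formats relate, a Windows FILETIME (100-nanosecond ticks since 1601-01-01 00:00:00 UTC) converts to Unix time by rescaling and shifting by the fixed gap between the two epochs; the helper name is hypothetical, and the constant is the standard 11,644,473,600-second difference:

// Hypothetical helper: convert a Windows FILETIME value to Unix seconds.
// FILETIME counts 100-ns ticks since 1601-01-01 UTC; 11644473600 is the
// number of seconds between that epoch and 1970-01-01 UTC.
const EPOCH_DIFFERENCE_SECONDS = 11644473600;

function filetimeToUnixSeconds(filetime) {
  return filetime / 1e7 - EPOCH_DIFFERENCE_SECONDS;   // 1e7 ticks per second
}

filetimeToUnixSeconds(133801632000000000);   // 1735689600 → 2025-01-01T00:00:00Z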

Try the tools