EnumArray - A Practical Mapping Trick Without Paying the Hash-Map Tax

Whenever I work with configuration systems, metadata tables, or low-level processing pipelines, I repeatedly run into the same pattern:
I have an enum, and I want a dictionary that maps each enumerator to some piece of data.

The naïve solution is universal and convenient:

std::unordered_map<Unit, const char*> unitNames {
    { Unit::Grams, "g" },
    { Unit::Meters, "m" },
    { Unit::Liters, "l" },
    { Unit::Items, "pcs" },
};

It works. It is readable. It is also wasteful.

For tiny lookup tables (and most metadata enums are tiny), a hash map is a poor fit:

  • unnecessary memory overhead
  • unnecessary hashing
  • unnecessary indirect lookups
  • no locality
  • no compile-time size information
  • and in the worst case, a surprising amount of hidden allocations

It is overkill for something that is basically a compile-time list of a few well-known items.

At the same time, an enum is just an integer. And an integer is a perfect array index.
This leads naturally to a simple idea: keep the dictionary interface, but back it with std::array.
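Stripped of any wrapper, that idea is just a plain std::array indexed by the enum's integer value. A quick sketch (my own, using a Unit enum like the one defined below):

```cpp
#include <array>
#include <cstddef>

enum class Unit { Grams, Meters, Liters, Items };

// Positional: entry i belongs to the enumerator with value i.
constexpr std::array<const char*, 4> unitNames {
    "g",   // Unit::Grams  == 0
    "m",   // Unit::Meters == 1
    "l",   // Unit::Liters == 2
    "pcs", // Unit::Items  == 3
};

constexpr const char* name(Unit u) {
    return unitNames[static_cast<std::size_t>(u)];
}
```

The hard-coded 4 and the purely positional initializers are exactly the fragility the wrapper removes.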

What we need is only one missing piece: the size of the array.
The easiest (not the most elegant, but the most robust) method is an extra enumerator:

enum class Unit {
    Grams,
    Meters,
    Liters,
    Items,

    Count
};

If you’ve seen older C codebases, you’ve seen this pattern. It’s boring, but it works and requires zero metaprogramming magic.

And now the container:

#include <array>
#include <cstddef>
#include <initializer_list>
#include <utility>  // std::pair, std::to_underlying (C++23)

template<typename Enum, typename T>
class EnumArray {
public:
    // Take the initializer_list by value: it is a lightweight view,
    // so there is nothing to gain from an rvalue reference.
    constexpr EnumArray(std::initializer_list<std::pair<Enum, T>> values) {
        for (const auto& [key, val] : values)
            data[std::to_underlying(key)] = val;
    }

    constexpr T& operator[](Enum key) {
        return data[std::to_underlying(key)];
    }

    constexpr const T& operator[](Enum key) const {
        return data[std::to_underlying(key)];
    }

private:
    // The sentinel enumerator gives us the array size for free.
    static constexpr std::size_t N = std::to_underlying(Enum::Count);
    std::array<T, N> data{};
};

Usage looks identical to a map:

EnumArray<Unit, const char*> unitNames {
    { Unit::Grams, "g" },
    { Unit::Meters, "m" },
    { Unit::Liters, "l" },
    { Unit::Items, "pcs" },
};

std::cout << unitNames[Unit::Items] << "\n";  // pcs

Efficient, minimal, predictable.

Why Does This Work Well?

Performance and locality

Data is compact and cache-friendly.
No pointer chasing, no buckets, no load factors.

Compile-time size

The array size is known at compile time, which means:

  • no hidden allocations
  • no need for reserve()
  • no dynamic resizing
  • possibility of making almost everything constexpr

Predictable semantics

Lookups are O(1) with no heuristics, no collisions, no need for custom hash functions.

If you are dealing with small fixed dictionaries (units, message types, UI states, processing steps, etc.), this is simply the most cost-effective approach.

Real-World Scenarios

Image processing filters

A pipeline might define:

enum class FilterId {
    Gaussian,
    Median,
    Sobel,
    Sharpen,
    Count
};

Mapping enum → filter parameters:

EnumArray<FilterId, FilterConfig> filters {
    { FilterId::Gaussian, {3, 1.0f} },
    { FilterId::Median,   {5} },
    { FilterId::Sobel,    {1} },
    { FilterId::Sharpen,  {2} }
};

Lookup is constant-time and extremely cheap. No heap usage during processing.
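FilterConfig is not defined here; one assumed shape that makes the initializers above ({3, 1.0f}, {5}, …) compile is an aggregate with a defaulted second member:

```cpp
// Hypothetical config type: a kernel size plus an optional strength
// parameter (Gaussian sigma, sharpen amount, ...). Defaulting the
// second member lets the single-value form `{5}` work.
struct FilterConfig {
    int   kernelSize = 0;
    float strength   = 0.0f;
};

constexpr FilterConfig gaussian {3, 1.0f};
constexpr FilterConfig median   {5};       // strength stays 0.0f
```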

Logging categories

enum class LogChannel {
    Network,
    Storage,
    Rendering,
    Physics,
    Count
};

Enum -> log-level:

EnumArray<LogChannel, Level> logLevels {
    { LogChannel::Network,  Level::Info },
    { LogChannel::Storage,  Level::Warning },
    { LogChannel::Rendering, Level::Debug },
    { LogChannel::Physics,  Level::Error },
};

Simulation flags or constants

enum class Material {
    Steel,
    Concrete,
    Glass,
    Wood,
    Count
};

Enum -> constants:

EnumArray<Material, float> density {
    { Material::Steel,     7850.f },
    { Material::Concrete,  2400.f },
    { Material::Glass,     2500.f },
    { Material::Wood,       600.f },
};

Edge Cases and Limits

This technique is simple, but not universal. Some restrictions matter.

The enum must be contiguous and zero-based

This breaks:

enum class Type {
    A = 10,
    B = 11,
    C = 512,
    Count
};

Your array would need 513 entries (Count lands at 513), almost all of them unused.
That is rarely acceptable in real systems.

Default construction is mandatory

Because the underlying std::array always holds all N elements, each one is default-constructed before the initializer list is applied.

If T is non-default-constructible:

struct NonDefault {
    NonDefault(int);
};

Then EnumArray<Enum, NonDefault> fails to compile, because std::array<NonDefault, N> cannot default-construct its elements.
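The failure shows up as a wall of std::array instantiation errors. If you want a readable diagnostic instead, one option (a sketch, not part of the class above; static_cast stands in for C++23's std::to_underlying) is a static_assert in the template:

```cpp
#include <array>
#include <cstddef>
#include <type_traits>

enum class Unit { Grams, Meters, Liters, Items, Count };

template<typename Enum, typename T>
class EnumArray {
    // Fail early with a message that names the actual requirement.
    static_assert(std::is_default_constructible_v<T>,
                  "EnumArray<Enum, T>: T must be default-constructible; "
                  "consider storing std::optional<T> instead");

    static constexpr std::size_t N = static_cast<std::size_t>(Enum::Count);
    std::array<T, N> data{};
public:
    T& operator[](Enum key) { return data[static_cast<std::size_t>(key)]; }
};
```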

Partial initialization yields default-constructed gaps

If you do this:

EnumArray<Unit, const char*> names {
    { Unit::Grams, "g" },
    { Unit::Liters, "l" }
};

Then:

names[Unit::Meters] == nullptr   // default value

Whether that’s acceptable depends on your application.

Duplicate initializers silently overwrite

EnumArray<Unit, int> values {
    { Unit::Grams, 1 },
    { Unit::Grams, 20 },   // overwrites silently
};

Hash maps also behave this way, but with an array it’s easier to make a mistake.
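If the silent overwrite worries you, a debug-only duplicate check in the constructor is cheap. A sketch (my own naming; a std::bitset tracks which slots have been written, and static_cast stands in for std::to_underlying):

```cpp
#include <array>
#include <bitset>
#include <cassert>
#include <cstddef>
#include <initializer_list>
#include <utility>

enum class Unit { Grams, Meters, Liters, Items, Count };

template<typename Enum, typename T>
class CheckedEnumArray {
    static constexpr std::size_t N = static_cast<std::size_t>(Enum::Count);
    std::array<T, N> data{};
public:
    CheckedEnumArray(std::initializer_list<std::pair<Enum, T>> values) {
        std::bitset<N> seen;  // one bit per enumerator slot
        for (const auto& [key, val] : values) {
            const auto idx = static_cast<std::size_t>(key);
            // Fires (in debug builds) when the same key appears twice.
            assert(!seen[idx] && "duplicate enumerator in initializer list");
            seen[idx] = true;
            data[idx] = val;
        }
    }
    const T& operator[](Enum key) const {
        return data[static_cast<std::size_t>(key)];
    }
};
```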

Out-of-range enum values = instant UB

If someone does:

Unit x = static_cast<Unit>(1000);
names[x];

You index past the end of the array - undefined behavior.
There is no protection unless you add explicit checks.


A Few Additional Useful Variants

Engine codebases often extend this idea:

Optional storage

Avoid default construction by storing std::optional<T>:

std::array<std::optional<T>, N> data;
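A fuller sketch of that variant (my naming; slots stay empty instead of being default-constructed, so a non-default-constructible T works and unset entries are detectable):

```cpp
#include <array>
#include <cstddef>
#include <initializer_list>
#include <optional>
#include <utility>

enum class Unit { Grams, Meters, Liters, Items, Count };

struct NonDefault {              // deliberately not default-constructible
    int value;
    explicit NonDefault(int v) : value(v) {}
};

template<typename Enum, typename T>
class OptionalEnumArray {
    static constexpr std::size_t N = static_cast<std::size_t>(Enum::Count);
    std::array<std::optional<T>, N> data{};  // empty slots, no default T
public:
    OptionalEnumArray(std::initializer_list<std::pair<Enum, T>> values) {
        for (const auto& [key, val] : values)
            data[static_cast<std::size_t>(key)] = val;
    }
    // Returning the optional lets callers distinguish "unset" from a value.
    const std::optional<T>& operator[](Enum key) const {
        return data[static_cast<std::size_t>(key)];
    }
};
```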

Static compile-time initialization

constexpr EnumArray<Unit, std::string_view> names = {
    { Unit::Grams,  "g" },
    { Unit::Meters, "m" },
    { Unit::Liters, "l" },
    { Unit::Items,  "pcs" },
};

Everything is resolved at compile time - provided the constructor and operator[] are marked constexpr.
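This only compiles when the constructor and operator[] are themselves constexpr. A self-contained sketch that verifies a lookup at compile time (static_cast standing in for C++23's std::to_underlying):

```cpp
#include <array>
#include <cstddef>
#include <initializer_list>
#include <string_view>
#include <utility>

enum class Unit { Grams, Meters, Liters, Items, Count };

template<typename Enum, typename T>
class EnumArray {
    static constexpr std::size_t N = static_cast<std::size_t>(Enum::Count);
    std::array<T, N> data{};
public:
    constexpr EnumArray(std::initializer_list<std::pair<Enum, T>> values) {
        for (const auto& [key, val] : values)
            data[static_cast<std::size_t>(key)] = val;
    }
    constexpr const T& operator[](Enum key) const {
        return data[static_cast<std::size_t>(key)];
    }
};

constexpr EnumArray<Unit, std::string_view> names {
    { Unit::Grams,  "g" },
    { Unit::Meters, "m" },
    { Unit::Liters, "l" },
    { Unit::Items,  "pcs" },
};

// The lookup is a constant expression, checked by the compiler.
static_assert(names[Unit::Items] == "pcs");
```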

Bounds-checked at()

T& at(Enum key) {
    // Cast to size_t so a negative underlying value also fails the range check.
    const auto idx = static_cast<std::size_t>(std::to_underlying(key));
    if (idx >= N) throw std::out_of_range("EnumArray");  // needs <stdexcept>
    return data[idx];
}

Used in high-integrity code where correctness beats speed.

When To Use EnumArray vs Map?

Use EnumArray when:

  • the enum is small and fixed
  • lookups are extremely frequent
  • memory locality matters
  • constexpr initialization is desirable
  • no dynamic insertions are needed
  • default-constructibility is acceptable

Use unordered_map/map when:

  • enum values are non-contiguous
  • you need insertion/removal at runtime
  • memory overhead doesn’t matter
  • the dictionary is sparse
  • the enum is controlled by another subsystem and may change in unpredictable ways

Final Thoughts

EnumArray is one of those micro-optimizations that isn’t exciting but pays off in real systems. You remove the noise of hashing, reduce allocations, improve locality, and make your code more predictable. It is not universal, but for the 90% case where an enum is a closed set of states known at compile time, it delivers exactly what you need:

  • dictionary-like syntax
  • array-level performance
  • no extra costs

I use this pattern often, and I rarely regret it.

If you are looking for a small, practical tool that helps you enforce structure and improve efficiency without introducing complexity - this is one of the simplest wins you can get in modern C++.