A desirable concurrency semantics to provide for programs is region serializability. This strong guarantee ensures that all program regions between synchronization operations appear to execute in some global, serial order consistent with program order. Unfortunately, this guarantee is currently provided only to programs that are free of data races. For programs with data races, system designers face a difficult trade-off between losing all semantic guarantees and hurting performance. In this paper, we argue that region serializability should be guaranteed for all programs, including those with data races. Such a guarantee allows programmers, compilers, and other tools to reason about a program execution as an interleaving of code regions rather than of individual memory instructions. We show one way to provide this guarantee with an execution style called uniparallelism and simple compiler support. The cost of the system is a 100% increase in CPU utilization, since uniparallel execution runs two replicas of the program. However, given sufficient spare cores, the system adds a median runtime overhead of only 15% for programs with 4 threads.