
What Is Concurrency Control?

By Jean Marie Asta
Updated: May 17, 2024

In data management programming, concurrency control is a mechanism designed to ensure that concurrent operations generate accurate results, and that those results are obtained in a timely manner. Concurrency control is most often seen in databases, where a cache of searchable information is maintained for users.

Programmers try to design a database so that the effect of transactions on shared data is serially equivalent. This means that data touched by a set of transactions ends up in the same state it would reach if those transactions had executed one at a time, in some particular order. Without this guarantee, data can become invalid when two transactions modify it concurrently.
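To make the problem concrete, here is a minimal sketch in Python (the scenario, variable names, and the deterministic interleaving are illustrative assumptions, not from the original article). Two transactions each add 10 to a shared balance, but both read the old value before either writes, so one update is lost and the result matches no serial order:

```python
# Simulate two transactions, T1 and T2, that each add 10 to a shared balance.
# Any serial execution (T1 then T2, or T2 then T1) would end at 120.
# The hypothetical interleaving below loses one update instead.
balance = 100

t1_read = balance          # T1 reads 100
t2_read = balance          # T2 reads 100 (before T1 has written)
balance = t1_read + 10     # T1 writes 110
balance = t2_read + 10     # T2 overwrites with 110; T1's update is lost

print(balance)  # 110, not the serially equivalent 120
```

This "lost update" is exactly the kind of invalid state that concurrency control is designed to prevent.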

There are multiple ways to ensure that transactions execute one after another, including mutual exclusion and a dedicated resource that decides which transaction gets access. These approaches are overkill, however, and do not let a programmer benefit from concurrency in a distributed system. Concurrency control allows multiple transactions to execute simultaneously while isolating them from one another, ensuring serializability. One way to implement it is an exclusive lock on a particular resource, serializing the executions of transactions that share that resource. A transaction locks an object it intends to use, and if another transaction requests the locked object, it must wait for the object to be unlocked.
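The exclusive-lock approach can be sketched with Python's standard `threading.Lock` (the account object and transfer function are illustrative assumptions). Each transaction acquires the lock before touching the shared object, so concurrent requesters simply wait their turn and no update is lost:

```python
import threading

account = {"balance": 100}
account_lock = threading.Lock()  # exclusive lock guarding the shared object

def transfer(amount):
    # The transaction locks the object before use; any concurrent
    # transaction requesting the same lock waits until it is released.
    with account_lock:
        current = account["balance"]
        account["balance"] = current + amount

threads = [threading.Thread(target=transfer, args=(10,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(account["balance"])  # 150: all five updates applied, one at a time
```

Because every read-modify-write happens inside the lock, the interleaved execution is equivalent to some serial order of the five transfers.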

Implementing this method in a distributed system involves lock managers: servers that issue locks on resources. These work much like servers for centralized mutual exclusion, where clients request locks and send messages to release locks on a particular resource. Serial equivalence, however, must still be preserved for concurrency control. If two separate transactions access the same set of objects, the results must be the same as if the transactions had executed in some particular order. To ensure ordered access to resources, two-phase locking is introduced, meaning that a transaction may not acquire any new lock once it has released a lock.
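A lock manager's core behavior can be sketched as a small single-process class (a real lock manager is a networked server; the class, method names, and resource names here are illustrative assumptions). It grants one exclusive lock per resource, and requesters for a held lock wait until it is released:

```python
import threading

class LockManager:
    """Minimal sketch of a lock manager: one exclusive lock per
    resource; requests for a held lock wait until it is released."""

    def __init__(self):
        self._cond = threading.Condition()
        self._holders = {}  # resource -> id of the transaction holding its lock

    def acquire(self, txn, resource):
        with self._cond:
            while resource in self._holders:
                self._cond.wait()          # block until the lock is released
            self._holders[resource] = txn

    def release(self, txn, resource):
        with self._cond:
            if self._holders.get(resource) == txn:
                del self._holders[resource]
                self._cond.notify_all()    # wake any waiting transactions

mgr = LockManager()
mgr.acquire("T1", "row42")
# A call to mgr.acquire("T2", "row42") here would block until T1 releases.
mgr.release("T1", "row42")
mgr.acquire("T2", "row42")  # succeeds once the lock is free
mgr.release("T2", "row42")
```

In a distributed deployment, `acquire` and `release` would be request and release messages sent to the lock-manager server rather than local method calls.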

In two-phase locking for concurrency control, the first phase is called the growing phase, during which the transaction acquires all the locks it needs. The second phase is the shrinking phase, during which the transaction releases its locks. This type of locking has a problem: if a transaction aborts, other transactions may already have read data from objects that the aborted transaction modified and unlocked. Those transactions must then be aborted as well, a situation known as a cascading abort.
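The two-phase rule itself can be sketched as a small Python class (the class and resource names are illustrative assumptions): the first release flips the transaction into its shrinking phase, after which any attempt to acquire a new lock is rejected:

```python
class TwoPhaseTransaction:
    """Sketch of the two-phase locking rule: once a transaction
    releases any lock (shrinking phase), it may not acquire new ones."""

    def __init__(self):
        self.locks = set()
        self.shrinking = False

    def acquire(self, resource):
        if self.shrinking:
            raise RuntimeError("2PL violation: no new locks after a release")
        self.locks.add(resource)   # growing phase

    def release(self, resource):
        self.shrinking = True      # first release starts the shrinking phase
        self.locks.discard(resource)

txn = TwoPhaseTransaction()
txn.acquire("A")
txn.acquire("B")   # still growing
txn.release("A")   # shrinking phase begins
try:
    txn.acquire("C")
except RuntimeError:
    pass           # acquiring after releasing is disallowed by 2PL
```

Note that this sketch enforces only the two-phase rule; it does not prevent the cascading aborts described above. Databases commonly address those with strict two-phase locking, which holds all locks until the transaction commits or aborts.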
