
"The automatic lock mode is invalid in this transaction": converting a configuration to managed locks

Today we will talk about locks both at the level of the 1C platform (8.3 and 8.2) and at the DBMS level. Data locking is a mandatory element of any system with more than one user.

Below I will explain how locks work and what types of locks there are.

A lock is information that a system resource has been captured by another user. There is an opinion that locking is an error. No: locking is an inevitable measure in a multi-user system for sharing resources.

Only redundant ("extra") locks can harm the system, that is, locks that lock information unnecessarily. You must learn to eliminate such locks: they can lead to suboptimal operation of the system.

Locks in 1C are divided into object locks and transactional locks.

Object locks, in turn, are optimistic and pessimistic, while transactional locks are divided into managed and automatic.

Object locks in 1C

This type of lock is implemented entirely at the 1C platform level and does not affect the DBMS.


Pessimistic locking

This lock is triggered when one user has changed something in an object form (for example, a catalog item form) and a second user tries to change the same object in a form.

Optimistic locking

This lock compares object versions: if two users have opened the same object form and one of them has changed and written the object, then when the second one tries to write, the system reports an error saying that the object versions differ.

Transactional locks in 1C

The mechanism of transactional locks in 1C is much more interesting and functional than the mechanism of object locks. This mechanism actively involves locking at the DBMS level.

Incorrect use of transactional locks may result in the following problems:

  • lost updates;
  • dirty reads;
  • non-repeatable reads;
  • phantom reads.

These problems were considered in detail in a separate article.

Automatic transactional locks in 1C and the DBMS

In automatic mode, locking is handled entirely and completely by the DBMS. The developer is not involved in the process at all. This simplifies the work of the 1C programmer; however, for an information system with a large number of users, automatic locks are undesirable (especially for the PostgreSQL and Oracle DBMSs, which lock the entire table when data is modified).

In automatic mode, different DBMSs use different isolation levels:

  • Serializable on the entire table: 1C file mode, Oracle;
  • Serializable on records: MS SQL, IBM DB2 when working with non-object entities;
  • Repeatable Read on records: MS SQL, IBM DB2 when working with object entities.

Managed transactional locks in 1C and the DBMS

Here all responsibility is taken by the developer of the applied solution at the 1C level. In this case the DBMS sets a fairly low isolation level for transactions: Read Committed (Serializable for the file DBMS).

When any operation is performed on the database, the 1C lock manager analyzes whether it is possible to lock (capture) the resource. Locks belonging to the same user are always compatible.

Two locks are incompatible if they are set by different users, have incompatible modes (exclusive/shared), and are set on the same resource.

Physical implementation of locks in the DBMS

Physically, locks are rows in a table located in the Master database. The lock table itself is named syslockinfo.

The table conventionally has four fields:

  1. the ID of the locking session, spid;
  2. what exactly is locked, the resource ID;
  3. the lock type (mode): S, U or X (in fact MS SQL has 22 lock types, but only three are used in conjunction with 1C);
  4. the lock state, which can take the values GRANT (the lock is set) and WAIT (the lock is waiting its turn).

The 1C:Enterprise system allows you to use two modes of working with the database: the automatic lock mode in a transaction and the managed lock mode in a transaction.

The fundamental difference between these modes is as follows. The automatic lock mode does not require the developer to take any actions to control locks in a transaction in order for the data rules to hold. These rules are ensured by the 1C:Enterprise platform through the use of certain transaction isolation levels in the given DBMS. This mode of operation is the simplest for the developer; however, in some cases (for example, with intensive simultaneous work of a large number of users) the transaction isolation level used in the DBMS cannot provide sufficient parallelism, which manifests itself as a large number of lock conflicts during user work.

When working in managed lock mode, the 1C:Enterprise system uses a much lower transaction isolation level in the DBMS, which makes it possible to significantly increase the parallelism of the applied solution. However, unlike the automatic lock mode, this isolation level by itself can no longer ensure all the rules for working with data in a transaction. Therefore, in managed mode the developer must control the locks set in a transaction independently.

The differences between operation in automatic lock mode and in managed lock mode are summarized in the following table:

Lock mode        DBMS              Lock granularity   Transaction isolation level

Automatic locks
                 File database     tables             Serializable
                 MS SQL Server     records            Repeatable Read or Serializable
                 IBM DB2           records            Repeatable Read or Serializable
                 PostgreSQL        tables             Serializable
                 Oracle Database   tables             Serializable

Managed locks
                 File database     tables             Serializable
                 MS SQL Server     records            Read Committed
                 IBM DB2           records            Read Committed
                 PostgreSQL        records            Read Committed
                 Oracle Database   records            Read Committed

Setting the lock mode in the configuration
The configuration has a property Data lock control mode. Each configuration object also has a property of the same name.
The data lock control mode for the configuration as a whole can be set to Automatic, Managed (the default for new configurations), or Automatic and managed. The values Automatic and Managed mean that the corresponding lock mode will be used for all configuration objects, regardless of the values set for each object. The value Automatic and managed means that for a particular configuration object the mode specified in its own Data lock control mode property will be used: Automatic or Managed.
Note that the data lock control mode specified for a metadata object applies to those transactions that are initiated by the 1C:Enterprise system itself when working with the data of this object (for example, when modifying the object's data).
If, however, the object write operation is performed in a transaction initiated by the developer (the BeginTransaction() method), the data lock control mode is determined by the Lock mode parameter of BeginTransaction(), not by the value of the metadata object's Data lock control mode property.
By default, the Lock mode parameter has the value DataLockControlMode.Automatic, so in order to use managed locks in an explicit transaction you should pass DataLockControlMode.Managed in this parameter (setting it makes sense only if the configuration's Data lock control mode property is set to "Automatic and managed").
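As an illustration, a minimal sketch of an explicit transaction opened in managed lock mode might look like this (the document variable and the write operation are illustrative assumptions, not taken from the article):

```bsl
// Open an explicit transaction in managed lock control mode.
// DataLockControlMode.Managed corresponds to РежимУправленияБлокировкойДанных.Управляемый.
BeginTransaction(DataLockControlMode.Managed);
Try
    // All data locks inside this transaction are handled by the 1C lock manager,
    // regardless of the Data lock control mode set for individual metadata objects.
    DocumentObject.Write(DocumentWriteMode.Posting);
    CommitTransaction();
Except
    // On any error, roll back so no partial changes remain.
    RollbackTransaction();
    Raise;
EndTry;
```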

Working with managed locks in the built-in language
The built-in DataLock object is designed for controlling locks in a transaction. An instance of this object is created with the constructor and allows you to describe the required lock spaces and lock modes. To set all the created locks, the Lock() method of the DataLock object is used. If this method is executed in a transaction (explicit or implicit), the locks are set and will be removed automatically when the transaction ends. If the Lock() method is executed outside a transaction, the locks will not be set.

Lock conditions are set either on equality of a field's value to a given value, or on the field's value falling within a given range.
Conditions can be set in two ways:

● by explicitly specifying the field name and value (the SetValue() method of the DataLockItem object);
● by specifying a data source containing the required values (the DataSource property of the DataLockItem object).

For each lock item, one of two lock modes can be specified:

● shared;
● exclusive.

The compatibility table of managed locks is as follows:

              Shared          Exclusive
Shared        compatible      incompatible
Exclusive     incompatible    incompatible

The shared lock mode implies that the locked data cannot be changed by another transaction until the current transaction ends.
The exclusive lock mode implies that the locked data cannot be changed by another transaction until the end of the current transaction, and also cannot be read by another transaction that sets a shared lock on this data.
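For example, a shared lock fits the "consistent reading" scenario: we want nobody to change the balances while we read them, while other readers should not be blocked. A sketch under assumed register and dimension names:

```bsl
// Must be executed inside a transaction, otherwise Lock() has no effect.
Lock = New DataLock;
LockItem = Lock.Add("AccumulationRegister.GoodsInWarehouses"); // illustrative register
LockItem.Mode = DataLockMode.Shared;
LockItem.SetValue("Warehouse", WarehouseRef); // illustrative dimension and variable
Lock.Lock();
// From here until the end of the transaction, other transactions may also read
// these records (shared locks are compatible with each other), but they cannot
// take an exclusive lock on them, i.e. they cannot change them.
```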

Features of work in "Automatic and managed" mode

When working in the "Automatic and managed" lock control mode, two features should be taken into account:

● Regardless of the mode specified for a given transaction, when writing data the system will additionally set the corresponding managed
locks.
● The lock control mode is determined by the topmost ("outermost") transaction. In other words, if by the time a transaction starts another transaction is already running, the new transaction can be executed only in the mode set for the transaction already running.

Let us consider these features in more detail.
The first feature is that even if a transaction uses the automatic lock control mode, the system will additionally set the corresponding managed locks when writing data in this transaction. It follows that transactions executed in managed lock mode can conflict with transactions
executed in automatic lock control mode.
The second feature is that the lock control mode indicated for a metadata object in the configuration, or specified when a transaction is started explicitly (as a parameter of the BeginTransaction() method), is only the "desired" mode. The actual lock control mode in which the transaction executes depends on whether this call is the first to start a transaction, or whether another transaction has already been started by this time in the given 1C:Enterprise session.
For example, if you want to control locks when writing register records while posting a document, the managed lock mode must be set both for the register itself and for the document, because the register records will be written in the transaction opened when the document is written.

Speeding up 1C in a few clicks, part 2: managed locks. September 4th, 2011

If you read 1C's methodology for converting a configuration to managed locks, you can find a lot of interesting and frightening things. In fact, everything is simple: in the configuration properties, change the data lock mode to "Managed". That's it. I can congratulate you: you have just switched to managed locks. In reality it is somewhat more complicated, but not by much.

To start, a small theoretical excursion on why locks are needed. Those who have access can read about it here: http://kb.1c.ru/articleview.jsp?id=30 (1C took the trouble to write a quite accessible article about data locking). For those who do not have access, I will describe in a nutshell why locks are needed:

Example 1. If, after turning on managed locks, we do nothing else, and at the same time post 2 documents in parallel (one of them a fraction of a second earlier), we get approximately the following picture:

Transaction 1         Transaction 2         Stock balance
Begin                                       1 pc
                      Begin                 1 pc
Read balance                                1 pc
                      Read balance          1 pc
Write off balance                           0 pc
                      Write off balance     -1 pc
Commit
                      Commit

What is wrong here? Balance control failed. The 2nd document managed to read the balance before the 1st managed to write it off: it saw 1 piece on the balance and calmly wrote it off right after the first one. It is worth noting that locking does still occur here: the 2 documents cannot write off the balance simultaneously, which is necessary for the logical integrity of the database, but it hardly helps to solve the applied task in this example.

Now let's try to correct the situation: in the document posting procedure we set an exclusive managed lock immediately before reading the balances.

Well, now that we have figured out locks, all that remains is to set managed locks where necessary: namely, only where balance control is performed. If a manager in your database has the right to post a document regardless of whether the goods (money) are on the balances or not, why do you need a lock at all? You can simply not set locks, or write them and comment them out until better times. If balances are controlled, it is as a rule 3-4 registers, at most a dozen. The lock can be set both in common procedures and functions and in record set modules. The code is extremely simple; open the syntax assistant and have a look:

Lock = New DataLock;
LockItem = Lock.Add("AccumulationRegister.GoodsInWarehouses");
LockItem.SetValue("Quality", Catalogs.Quality.FindByCode("1"));
LockItem.Mode = DataLockMode.Exclusive;
LockItem.DataSource = DocumentObject.Goods;
LockItem.UseFromDataSource("Nomenclature", "Nomenclature");
LockItem.UseFromDataSource("Warehouse", "Warehouse");
Lock.Lock();

Actually, everything is clear at once: we lock "GoodsInWarehouses"; one dimension is set explicitly, and the values of the other two are taken from the data source, the document's tabular section.

Those who have read the books on 8.2 probably remember the "new posting logic", where balance control is performed after the document's movements have been recorded. Have you wondered why? Let's redraw the same table so that balance reading and locking happen after the movements are written:

Transaction 1         Transaction 2         Stock balance
Begin                                       1 pc
                      Begin                 1 pc
Write off balance                           0 pc
                      Write off balance     -1 pc
Lock                                        -1 pc
Read balance          Lock attempt          -1 pc
                      Waiting for lock      -1 pc
Commit                Waiting for lock      -1 pc
                      Lock                  -1 pc
                      Read balance          -1 pc
                      Rollback              0 pc

The difference looks insignificant, but the performance gain comes from the fact that while the balances are being written off (written to the database, which actually takes time) no lock is held yet. The lock appears later, closer to the end of the transaction, where we check whether the write-off produced negative balances; this quite satisfies the business logic of the application.
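A rough sketch of this posting order in a document's posting handler (register, dimension, and tabular section names are illustrative; the real balance check would query the register's balance table):

```bsl
Procedure Posting(Cancel, PostingMode)
    // 1. Record the movements first, while no lock is held yet.
    //    (Filling of Movements.GoodsInWarehouses from the Goods tabular
    //    section is omitted for brevity.)
    Movements.GoodsInWarehouses.Write();

    // 2. Only now, close to the end of the transaction, set the lock.
    Lock = New DataLock;
    LockItem = Lock.Add("AccumulationRegister.GoodsInWarehouses");
    LockItem.Mode = DataLockMode.Exclusive;
    LockItem.DataSource = Goods; // document tabular section
    LockItem.UseFromDataSource("Nomenclature", "Nomenclature");
    Lock.Lock();

    // 3. Read the resulting balances; if any went negative, set Cancel = True.
    //    The transaction rollback will then undo the movements written in step 1.
    //    (The balance query itself is omitted here.)
EndProcedure
```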

Knowing what locks are for, you can really manage them based on the business tasks you are solving. A DBMS is designed on the assumption of maximum data protection. If, for example, you process bank transactions, locks should be everywhere and at the maximum level. It is better to lock extra records than to allow data inconsistency.

If, on the other hand, you sell buns or ballpoint pens, you hardly need that many locks. You lose hundreds of times more from defects and mis-sorting due to human error than you would from two users simultaneously recording two identical shipments.

To vary between such different tasks, DBMSs have isolation levels. By setting the transaction isolation level, you tell the DBMS which locks to apply in which cases (when writing and when reading in a transaction): S locks (writing is impossible) or X locks (both reading and writing are impossible).

So, in automatic mode you almost always have the Serializable isolation level, which sets X locks both where needed and where not needed, and this will noticeably spoil your life.

In managed mode you get Read Committed, which sets and immediately releases an S lock when reading, and an X lock only when writing. The most cunning level. The briefly imposed S lock simply checks that no X lock is already set on this data, which guarantees that only committed data is read, as is customary for this isolation level. And if you have read and followed the advice in the previous article, there will not even be an S lock on reads, so at the DBMS level only writing will block writing, which is correct and necessary for the integrity of the data.

What to do with managed locks is your decision alone. But I would advise not to hurry to set them everywhere. I have met companies running in automatic lock mode where the words "a lock got stuck" were heard even from the lips of the deputy general director, and at the same time negative balance control was turned off...

In multi-user operation, data locks in 1C are a necessary mechanism. It is they that protect against situations like two managers simultaneously selling the same goods to different customers. The 1C platform provides two lock modes, managed and automatic. The first of these is optimal for highly loaded systems with a large number of users. Let us consider it in more detail.

Features of the managed lock mode

Unlike the automatic mode, the managed mode allows the 1C system to use its own lock manager and apply less rigid DBMS rules. That is, the built-in mechanism can take into account the business logic of the application and sets restrictions on reading and writing data more smoothly and precisely. Changing the lock mode can give a significant performance gain and reduce the number of transaction lock errors. This is achieved by an additional check by the lock manager, before the request is passed to the DBMS, for compliance with the restrictions set within the system.

A significant minus is that the developer has to control data consistency independently when data is written and processed. It is quite likely that after enabling the managed lock mode you will have to write many checks to reach the previous level of safety. Despite this, many companies prefer to switch to managed mode when their capabilities allow it.

When developing software checks and restrictions, it is important to remember a feature of managed locks: each of them is held until the end of the transaction. It follows that programmers should set locks as close to the end of the transaction as possible, so that the probability of waiting is minimal. If you need to perform calculations and write their result, it is more correct to set the lock after the calculations.
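In code, this ordering might look like the following sketch (LongCalculation and WriteResult are hypothetical helpers, and the register name is illustrative):

```bsl
// Worse: taking the exclusive lock before the calculation would hold it
// for the whole computation, making other users wait longer than necessary.

// Better: calculate first, set the lock only just before writing.
Result = LongCalculation(); // hypothetical long-running computation

Lock = New DataLock;
LockItem = Lock.Add("InformationRegister.CalculationResults"); // illustrative name
LockItem.Mode = DataLockMode.Exclusive;
Lock.Lock(); // from here the lock is held only until the end of the transaction

WriteResult(Result); // hypothetical write of the prepared data
```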

Another common locking problem in 1C is document import. Many developers use a fairly simple solution: during the load, do not post documents, only create them. Afterwards, a simple mechanism posts all the loaded data in multi-threaded mode, partitioned by key characteristics: nomenclature, partners, or warehouses.

The algorithm for switching to managed locks in 1C looks simple, but an unqualified 1C administrator can make errors that will be difficult to fix. Most often problems arise with excessive or insufficient locking levels. In the first case there will be problems with system performance, up to emergency stops of the server cluster. Insufficient locks are dangerous because of accounting errors during simultaneous user work.

Switch to managed mode

Although the complete algorithm for switching to managed locks is presented below, it should be performed by an experienced specialist. If you do not understand the principles of the locking mechanisms in 1C and the DBMS, you are unlikely to write the restrictions correctly. But this applies to complex configurations. On simple configurations, beginner developers can successfully switch the mode and gain experience:

  • First you need to change the data lock control mode for the configuration. To do this, open the configuration tree in the Designer and change the mode in the properties of the root element, in the Compatibility section. Select "Automatic and managed" so that no errors occur before all objects have been moved to the new mode;
  • Now it is the documents' turn. After all, it is with their help that we register all the events that need to be controlled. Start transferring to managed locks with the most heavily loaded documents. On the Other tab, set the lock mode to "Managed";
  • Find all the registers related to the documents already processed and switch them to managed mode in the same way as the documents;
  • The next step is to find and change all transactions involving the changed objects. This includes explicit transactions with the BeginTransaction() keyword, and all posting of documents and register writes performed in transactions;
BeginTransaction();
DeletionFailed = False;
For Each DocumentRef In DocumentSelection Do
    DocumentObject = DocumentRef.GetObject();
    Try
        DocumentObject.SetDeletionMark(True);
    Except
        DeletionFailed = True;
        RollbackTransaction();
        Message("Failed to delete document " + DocumentObject);
        Break;
    EndTry;
EndDo;
If Not DeletionFailed Then
    CommitTransaction();
EndIf;
  • Exclude the FOR UPDATE operator of the query language. You can replace it with the DataLock object, which requires changing the query and the algorithm that calls and processes it.
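A sketch of such a replacement (register, dimension, and variable names are illustrative): instead of the FOR UPDATE clause, we set a managed lock explicitly and then run an ordinary query.

```bsl
// Before (automatic mode): the query itself locked the data.
// SELECT ... FROM AccumulationRegister.GoodsInWarehouses FOR UPDATE

// After (managed mode): set the lock explicitly...
Lock = New DataLock;
LockItem = Lock.Add("AccumulationRegister.GoodsInWarehouses");
LockItem.Mode = DataLockMode.Exclusive;
LockItem.SetValue("Warehouse", WarehouseRef); // illustrative condition
Lock.Lock();

// ...then run a plain query without FOR UPDATE.
Query = New Query(
    "SELECT
    |   Balances.Nomenclature,
    |   Balances.QuantityBalance
    |FROM
    |   AccumulationRegister.GoodsInWarehouses.Balance(, Warehouse = &Warehouse) AS Balances");
Query.SetParameter("Warehouse", WarehouseRef);
Result = Query.Execute();
```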

The last two stages are the most complex and demand qualification from the developer, but they are what guarantees that accounting in the system remains in a working state.

The main reasons for switching to managed locks:

  • The main reason: a recommendation from a 1C expert based on technological log readings or a performance review;
  • Problems with parallel operation of users;
  • Use of Oracle, PostgreSQL, etc.


Essence of controlled blocking

When working in automatic lock control mode, 1C:Enterprise sets a high degree of data isolation in transactions at the DBMS level. This makes it possible to completely exclude obtaining non-consistent or incorrect data, without any special effort on the part of applied developers.

This is a convenient and correct approach when the number of active users is small. The price of ease of development is a certain amount of redundant locking at the DBMS level. These locks are connected both with the implementation details of the locking mechanisms in the DBMS itself, and with the fact that the DBMS cannot take into account (and does not take into account) the physical meaning and structure of 1C:Enterprise metadata objects.

Under high contention for resources (a large number of users), at some point the effect of redundant locking becomes noticeable as reduced parallelism and performance.

After the configuration is converted to managed mode, the additional "lock manager" is activated, and control over data integrity is performed not on the DBMS side but on the 1C server side. This increases the load on the 1C server hardware (faster processors and more memory are needed) and in fact introduces even a small slowdown (a few percent), but it improves the locking situation much more significantly (fewer locks, because they are taken per object rather than per combination of tables; a smaller lock area; and in some cases a shorter lock lifetime, i.e. not until the end of the transaction). Thanks to this, overall parallelism improves.


New 1C configurations are implemented in managed mode from the start.

  • Question: Is it possible to first do an audit, and only then convert to managed locks?

Answer: Yes. The audit will serve as additional justification of the feasibility of converting to managed locks, and will also assess the contribution of automatic locks to the overall slowdown and whether additional efforts beyond the conversion are needed.

  • Question: To convert to managed locks, what exactly should we provide access through: RDP, TeamViewer? Or can we send you a configuration file?

Answer: We try not to limit ourselves to one specific remote access technology; any remote access technology will do. If it does not matter to you, then RDP is the most practical.
We can perform the optimization on a sent configuration file, but then we will not be able to debug on real data, and you will have to test carefully yourselves. If we perform the optimization on a copy of the database, we can test thoroughly before handing the result of the work over to you.

  • Question: We have 10 full-time programmers who change something in the configuration every day. We use a common configuration repository. How will interaction be organized during the conversion to managed locks? Or do all programmers need to be sent on vacation?

Answer: As a rule, our changes are made within a couple of days. The rest of the time is spent testing the changes, among other things from the point of view of the required business logic rather than purely technical considerations. We can deliver the changes as a separate .cf configuration file, which your programmer will then merge into the repository. No one needs to be sent on vacation. With other interaction options, we only need to agree on which objects your developers plan to capture, so that we can build a work plan convenient for both parties. Usually it is not necessary for your developers to capture the entire configuration, nor to hand the "steering wheel" over to us.


