Coaching Skills

Target Audience
This workshop is developed for those who wish to help trainees effectively apply skills learned in training courses at their workplace, using a structured approach to learning transfer.
Managers with direct responsibility for training programs and administrators with indirect responsibility for training may also find this workshop useful.
Prerequisites
None. However, the Business Edge® Training of Trainers (ToT): Facilitation Skills and Training Needs Analysis (TNA) courses, or their equivalents, are excellent complements to this workshop.
Course Objectives
Upon completion of this course, you should be able to:

  • Explain the purpose and benefits of coaching
  • Review the action-planning context for coaching
  • Describe the five coaching skills
  • Use effective listening techniques with a coachee
  • Explain the four dimensions of building rapport with a coachee
  • Apply the 4P model to resolve problems and issues arising with a coachee
  • Give feedback to a coachee in a constructive manner
  • Explain how to facilitate and support a coachee’s learning
  • Demonstrate the techniques of each coaching skill
  • Apply coaching skills to the transfer of learning from the classroom to the workplace: initiating, monitoring, and concluding action plans
Classroom-based course
The workshop uses an interactive methodology in order to engage participants actively in the learning process. During the course, your trainer will act as both instructor and facilitator, using a variety of learning methods to help you and your fellow participants share experiences and learn through participation in activities such as group discussions, case studies, role-playing, and games. At the same time, you will be guided to apply skills learned to client examples of your own choosing.
Not available at this time
  • Introduction to the workshop
    • Overview of coaching: What and why
  • Coaching and action planning
    • The business case for training transfer
    • Requirements for successful training transfer
    • Transfer methods
    • The Business Edge® action planner
  • Five coaching competencies
    • 1: Questioning and listening (LACE)
    • 2: Building rapport (RITE)
    • 3: Observing and analyzing (4P’s)
    • 4: Providing feedback (FEED)
    • 5: Facilitating learning (STAIR)
    • Putting it all together
  • Concluding remarks
Course Duration
2 days (depending on the number of participants attending)
Posted in Education and Training, Knowledge, Problem solving, Uncategorized

What do GLOSS, EASE, FEED, and OFF stand for?

The FARMERS School is a ToT platform that offers 16 modular courses on rice-based integrated farming and marketing to boost the training capacities of agricultural extension workers, local farmer technicians, and farmer leaders. Each module was developed by an assigned partner agency as an expert contribution to the project. The development of the ToT modules was based on the adult learning framework of the International Finance Corporation (IFC), as part of the private sector’s contribution to the project.
IFC’s adult learning approach builds on the learning cycle of adult learners as a crucial aspect of training, in order to optimize learning. It offers a facilitation guide for conducting a training session or meeting based on the four-quadrant learning cycle: GLOSS, EASE, FEED, and OFF. GLOSS stands for Get attention, Link topics to experience or knowledge, learning Outcomes, Structure or outline of the topic, and Stimulate interest; EASE for Explain, Activity, Summary, and Examples; FEED for Frame positive intent, Evidence, Effect, and Diagnosis; and OFF for review Outcomes, Feedback, and Future link.

Posted in Education and Training, Knowledge, Uncategorized

Principles of adult education

“Education” has many meanings, but it is generally accepted as a change in behavior. This piece is not intended to analyze the existing theories behind adult education, but to lay out the principles of adult education (commonly abbreviated as POD). The principles presented here are essentially the same as those used in training based on instructional methods; what distinguishes them is that the POD principles are more widely known.
These principles relate to training and education, and they are usually applied in formal classroom situations or in on-the-job training (internships). Each form of training should incorporate as many of the nine principles below as possible. To make the nine principles easy to remember, a mnemonic is commonly used: RAMP 2 FAME.
R = Recency
A = Appropriateness
M = Motivation
P = Primacy
2 = 2 – Way Communication
F = Feedback
A = Active Learning
M = Multi – Sense Learning
E = Exercise
These principles are important in many ways: they allow you (the coach) to prepare a session in a timely and adequate manner, to present sessions effectively and efficiently, and to evaluate those sessions. Let’s look at the ideas behind the term RAMP 2 FAME. It is important to note that these principles are not presented in any particular sequence; they all stand in equal relation to one another.


Recency

The law of recency shows us that the things learned or received last are those best remembered by attendees/participants. This has two separate meanings in education. The first concerns the content (material) at the end of a session; the second relates to keeping things “fresh” in the minds of the participants. In the first application, it is important for coaches to make a summary as often as possible and to be sure that the key/core messages are emphasized again at the end of the session. In the second application, the coach should plan a review per section in each presentation.

Factors for consideration of recency:

* Try to keep each session relatively short, no more than 20 minutes if possible.
* If a session runs longer than 20 minutes, summarize (recapitulate) often. Longer sessions should be divided into shorter sub-sessions with pauses, so you can provide summaries.
* The end of each session is important. Summarize the whole session and put the emphasis on the messages or key points.
* Strive to keep attendees/participants aware of the direction and progress of their learning.


Appropriateness

The law of appropriateness (suitability) tells us that the training as a whole, the information, the aids and tools used, the case studies, and the other materials must be tailored to the participants’ needs. Participants will easily lose motivation if the coach fails to keep the material relevant to their needs. In addition, coaches must continually provide opportunities for participants to find out how new information relates to the knowledge they already have, so that we can eliminate concerns about anything vague or unknown.
Factors to consider regarding appropriateness:

* Coaches should clearly identify the participants’ need to take part in the training. With the needs identified, the coach must make sure that everything related to the session matches those requirements.
* Use descriptions, examples, or illustrations that are familiar to the participants.


Motivation

The law of motivation tells us that participants must have a desire to learn, must be ready to learn, and must have a reason to learn. Coaches find that participants with a strong motivation to learn, or a strong desire to succeed, do better than others, first of all because motivation makes the learning environment (atmosphere) pleasant. If we fail to apply the law of appropriateness and neglect to make the material relevant, we are certain to lose the participants’ motivation.

Factors to consider regarding motivation:

* Material should be meaningful and valuable to the participants, not only to the coach.
* Not only the participants but also the coach must be motivated; if the coach is not motivated, the training may be uninteresting and may not even reach the desired goal.
* As mentioned under the law of appropriateness, a trainer needs to identify why the participants have come to the training. Coaches can usually create motivation by showing that the session meets the participants’ needs.
* Move from the known to the unknown. Start the session with points that are familiar to the participants, then build up slowly, connecting the points, so that everyone knows where they are in the training process.



Primacy

The law of primacy tells us that the things participants learn first, like a first impression or the first series of information obtained from the coach, are very important. For this reason, it is good practice to include all the key points at the beginning of the session. As the session proceeds, develop the key points and the related information. Also included in the law of primacy is the fact that when participants are shown how to do something, they must be shown the correct way at the start. The reason is that it is sometimes very difficult to “re-teach” participants once they have made a mistake at the beginning of the exercise.

Factors to consider regarding Primacy:

* Once again, try to keep sessions relatively short, preferably about 20 minutes, as suggested by the law of recency.
* The beginning of your session is crucial. Many participants will be listening, so make it interesting and fill it with information important to them.
* Try to keep participants aware of the direction and progress of the study.
* Make sure participants get things right the first time you ask them to do something.


2-Way Communication

The law of 2-way communication emphasizes that training involves communication with participants, not at them. The various forms of presentation should use the principle of two-way communication, or feedback. This does not mean that the entire session must be a discussion, but it should allow interaction between the trainer/facilitator and the participants.
Factors for consideration of two-way communication:

* Your body language is also part of two-way communication: make sure it does not conflict with what you say.
* Your session plan should include interaction with those it was designed for, namely the participants.


Feedback

The law of feedback shows us that both facilitators and participants need information from each other. The facilitator needs to know that the participants are following along and paying attention to what is said, and the participants in turn need feedback on their performance.
Reinforcement also needs feedback. If we praise participants (positive reinforcement) for doing things right, we have a much greater chance that they will change their behavior as we want. Be aware, too, that too much negative reinforcement will probably keep us from obtaining the response we expect.
Factors for consideration of feedback:
* Participants should be tested regularly to provide feedback to the facilitator.
* When participants are tested, they should receive feedback on their performance as soon as possible.
* Tests may also include regular questions from the facilitator about the condition of the group.
* Not all feedback has to be positive, as many people believe. Positive feedback is only half of it, and it is almost useless in the absence of negative feedback.
* When a participant does or says something right (e.g., answers a question), acknowledge it or announce it (in front of the group/other participants, if possible).
* Prepare your presentation so that positive reinforcement is built in early in the session.
* Take care to give positive feedback to participants who do things right, as well as negative feedback to those who make mistakes.


Active Learning

The law of active learning shows us that participants learn more when they are actively involved in the training process. Remember the adage “learning by doing”? It matters in training adults. If you want participants to learn to write a report, do not just tell them how it should be done; give them the opportunity to do it. Another advantage is that adults are generally not used to sitting in a classroom all day, so the principle of active learning will help keep them from growing bored.
Factors for consideration of active learning:
* Use exercises or practice while giving instruction.
* Use plenty of questions while giving instruction.
* A quick quiz can be used to keep participants active.
* If possible, let participants do what the instructions describe.
If participants are allowed to sit for long periods without participating or being asked questions, chances are they will become sleepy or lose attention.


Multi-Sense Learning

The law of multi-sense learning says that learning is much more effective if participants use more than one of their five senses. If you tell trainees about a new type of sandwich, they will probably remember it. If you let them touch, smell, and taste it as well, there is no way they will forget it.

Factors for consideration of multi-sense learning:
* If you tell the participants something, try to show it as well.
* Use as many of the participants’ senses as necessary as a means of learning, but do not lose sight of the target to be achieved.
* When using multi-sense learning, make sure it is not difficult for the group to hear, see, and touch whatever you intend.


Exercise

The law of exercise indicates that things that are repeated are the best remembered. By having participants do exercises or repeat the information provided, we increase the chance that they will be able to recall that information later. It is best if the coach has participants practice or repeat the lesson, repeating the information in different ways: perhaps the coach can talk about a new process, then show a diagram/overhead, then show the finished product, and finally ask the participants to complete a given task. Exercise is also related to intensity. The law of exercise also refers to meaningful repetition or relearning.
Factors for consideration in the exercise:

* The more often trainees repeat something, the better they remember the information.
* By asking repeated questions we improve training.
* Participants should do the repetition themselves; merely taking notes does not count.
* Summarize as often as possible, because summarizing is another form of exercise. Always make a summary when concluding a session.
* Regularly have participants recall what you have presented so far.
* It is often said that without some form of exercise, participants will forget 1/4 of what they learn within 6 hours, 1/3 within 24 hours, and about 91% within 6 weeks.


These principles of learning relate to training and education. They are used in all sectors/areas, whether in a classroom or in an apprenticeship system. They can be applied to children and adolescents as well as to adults. Effective instruction should use as many of these principles as possible, if not all of them. When you plan a session, review the entire draft to ensure that these principles have been used; if not, it may need revision (improvement).
Posted in Education and Training, Knowledge

java.lang.ClassNotFoundException: org.springframework.web.context.ContextLoaderListener

I had a similar problem when running a Spring web application in an Eclipse-managed Tomcat. I solved it by adding the Maven dependencies to the project’s web deployment assembly.

  1. Open the project’s properties (e.g., right-click on the project’s name in the project explorer and select “Properties”).
  2. Select “Deployment Assembly”.
  3. Click the “Add…” button on the right margin.
  4. Select “Java Build Path Entries” from the Directive Type menu and click “Next”.
  5. Select “Maven Dependencies” from the Java Build Path Entries menu and click “Finish”.

You should see “Maven Dependencies” added to the Web Deployment Assembly definition.
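If the project is built with Maven, the missing class normally lives in the spring-web artifact. As a reference point, the POM entry looks like the sketch below; the version shown is only an example, so use whatever version your project targets:

<!-- spring-web contains org.springframework.web.context.ContextLoaderListener -->
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>5.3.30</version>
</dependency>

With the Deployment Assembly fix above, this JAR (and the rest of the Maven dependencies) will be copied into WEB-INF/lib when the application is published to Tomcat.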


Posted in Integration, Problem solving, Programming, Uncategorized

Basic Strategy for Algorithmic Problem Solving

The strategy consists of five big steps:

  1. Read and comprehend the problem statement.
  2. Select theoretical concepts that may apply.
  3. Describe the problem qualitatively.
  4. Formalize a solution strategy.
  5. Test and describe the solution.

Each step has an attached questionnaire containing questions that will lead you toward the solution of the problem or, if needed, back to review your work.

This document is based on the paper: Cabral, Luis G. et al. “Solucion de Problemas”. Contactos Vol II, No. 8. Oct-Dic 1985. pp.42-51. UAM-I, ciencias basicas e ingenieria, Mexico.

General Problem Solving Strategy

Guiding-questionnaires to be used with the General Strategy for algorithm creation

Guide 1

  1. Do you understand every word used within the problem statement?
  2. What computational elements are relevant to the problem?
  3. What non-computational elements are relevant to the problem (mathematics, physics, geography, etc.)?
  4. Use your own words to describe the problem. If needed, make a drawing depicting the situation, clearly stating the relevant objects and times.
  5. Have you solved any similar problem? If so, take advantage of that experience and its information.
  6. What data or resources are provided within the statement?
  7. What data or results are requested within the statement?
  8. Check answers 6 and 7 and decide if they are consistent with your answers 2 and 3.

Guide 2

  1. Identify all theoretical (and empirical) concepts related with the problem.
  2. Select a structure able to simplify data handling: arrays, records, files, local variables, global variables, linked lists, etc.
  3. Identify the kind of problem(s) according to its (their) structure: sequential, selection, iterative.
  4. Identify available algorithmic elements and select what you need: well-defined instructions, already-known algorithms, etc.
  5. Is it possible to simplify the problem by dividing it into simpler cases and selecting a different approach for each one? Is it possible to eliminate redundant or unnecessary data?

Guide 3

  1. Do you know any way to solve the problem by hand? If so, propose several examples and solve them “by hand”, then attempt to create a generalization. To do that, think carefully about each step performed and watch for the actions common to every example.
  2. Make a list of the variable elements, specifying their magnitudes and measurement units. Assign them proper symbols or names, taking care to avoid repetition.
  3. Which principles or relationships apply to the problem?
  4. Write down the selected relationships using your own variables (symbols or names). If needed, describe equations with words.
  5. Are all variables in use? Are there as many relationships as unknown variables?
  6. Are you using all the information available in the problem statement? If not, select just the important parts.

Guide 4

  1. Describe your solution qualitatively (you can start by making a narration.)
  2. Make some predictions regarding the expected result based only upon the description you made. Do not assume anything that is not in your description.
  3. Make the required relationships and check that the result comes from the selected variables. (Keep in mind the measurement units.)
  4. Substitute values (with their corresponding signs and units) at the end of your development of relationships.
  5. Transform your description into an algorithm (pseudocode or flowchart). Remember, the algorithm must ask unknown values, show main results and store (in variables) the results of relationships and formulas.

Guide 5

  1. Manually compute the result (i.e., perform a hand-trace). If needed, draw plots that describe the behavior of the variables.
  2. Strictly follow each step of the algorithm and look at the results. (Someone else can perform this step.)
  3. Are all your predictions from 4.2 fulfilled? Are measurement units preserved?
  4. Do the units make sense?
  5. Is the order of magnitude of the results reasonable?
  6. Does it work for boundary values?
  7. Does every variable have an initial value?
  8. Interpret the result, write down an explanation of how it was produced, and assign units.
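As a small illustration (my own toy example, not from the cited paper), here is how the five steps might look for a simple problem: computing the average of a series of temperature readings. The pseudocode in step 4 follows Guide 4.

Step 1 (comprehend): given N temperature readings, report their average.
Step 2 (concepts): arithmetic mean; an array of readings; an iterative structure.
Step 3 (qualitative description): sum all the readings, then divide the sum by the count N.
Step 4 (formalization, as pseudocode):
    read N
    sum <- 0
    for i <- 1 to N do
        read t[i]
        sum <- sum + t[i]
    average <- sum / N
    show average
Step 5 (test): hand-trace with N = 2 and readings 10 and 20; the predicted average is 15. Check a boundary value (N = 1) and confirm that every variable has an initial value.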
Posted in Education and Training, Problem solving, Uncategorized

All About TransactionScope

1. Introduction


TransactionScope is a special and important class in the .NET Framework. Its main responsibility is supporting transactions from a code block. We often use this class for managing local as well as distributed transactions from our code. Use of TransactionScope is simple, straightforward, and reliable, and for this reason it is very popular among .NET developers. In this article, I explain transaction-related theory with code samples, and show various scenarios where we can use TransactionScope, with various options, for managing real-life transactions.

2. Background

Transaction management is very important to any business application, and every large-scale development framework provides a component for managing transactions. The .NET Framework is a large development framework, and it provides its own transaction management component. Before .NET Framework 2.0 we used SqlTransaction to manage transactions. From version 2.0, the .NET Framework has the TransactionScope class, available in the System.Transactions assembly. It provides a transactional framework with whose help any .NET developer can write transactional code without knowing many of the details. For this reason it is very popular among .NET developers, who use it widely for managing transactions. But the story is not finished yet; I would say the story has only just started.

In the real world you will find exceptional scenarios and issues where knowing only how to use TransactionScope is not good enough. To resolve transactional issues like deadlocks, timeouts, etc., you must know every concept directly or indirectly related to transactions. There is no alternative. So the concepts of a transaction and its related components need to be clear.

3. How to Use TransactionScope

Use of TransactionScope in a .NET application is very simple. Anyone can use it by following these steps:

  1. Add a System.Transactions assembly reference to the project.
  2. Create a transactional scope/area with the help of the TransactionScope class, starting with a using statement.
  3. Write the code which needs transactional support.
  4. Execute the TransactionScope.Complete method to commit and finish the transaction.

Really, it is as simple as that. But in a real-life project that knowledge alone is not sufficient. You need more transaction-related knowledge, otherwise you cannot handle transaction-related issues. So first of all, we should be clear about the transaction concept.
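The four steps above can be sketched as follows. This is a minimal sketch rather than production code; the connection string is a placeholder, and the Data table is borrowed from the examples later in this article:

using System;
using System.Data.SqlClient;
using System.Transactions;

class TransactionScopeSketch
{
    static void Main()
    {
        // Step 2: create the transactional scope with a using statement.
        using (var scope = new TransactionScope())
        {
            // Step 3: code that needs transactional support. The connection
            // enlists in the ambient transaction automatically when opened.
            using (var conn = new SqlConnection("...your connection string..."))
            {
                conn.Open();
                using (var cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "INSERT INTO Data(Code) VALUES('A-100');";
                    cmd.ExecuteNonQuery();
                }
            }
            // Step 4: mark the transaction complete. If Complete() is not
            // called, the transaction rolls back when the scope is disposed.
            scope.Complete();
        }
    }
}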

4. Transaction

What is a transaction? You can find definitions of a transaction in various sources: Wikipedia, other websites, books, articles, blogs. In short, we can say a transaction is a series of pieces of work treated as a whole, which are either completed fully or not at all.

Example: Transfer money from Bank Account-A to Account-B

Series of (actually two) tasks/processes:

  1. Withdraw amount from Account-A
  2. Deposit that amount to Account-B

We understand that the transfer of money (from Account-A to Account-B) consists of two individual processes. The transfer will only be accurate and successful if both processes individually succeed. Suppose that does not happen: process-1 succeeds but process-2 fails. Then the money will be deducted from Account-A but not deposited to Account-B. If that happens, it is very bad, and no one will accept it.

5. Business Transaction

Business transactions are interactions between customers, suppliers, stakeholders, and the other parties involved in doing business. In this article I am not going to present anything regarding business transactions.

6. Database Transaction

In software development, when we say transaction, by default we assume a database transaction. In a database transaction, a series of data manipulation statements (insert/update/delete) executes as a whole: either all statements execute successfully, or every statement fails, so that the database remains in a consistent state. A database transaction actually represents a change of database state in an accurate way.

7. Local Transaction


A local transaction is one where a series of data manipulation statements executes as a whole against a single data source/database. It is actually a single-phase transaction handled directly by the database. To manage local transactions, System.Transactions has a Lightweight Transaction Manager (LTM), which acts like a gateway. All transactions started through System.Transactions are handled directly by this component. If it finds, based on some predefined rules, that the transaction is distributed in nature, it escalates the transaction to the MSDTC distributed transaction coordinator.

8. Distributed Transaction


A transaction which works with multiple data sources is called a distributed transaction. If the transaction fails, the affected data sources are rolled back. In System.Transactions, MSDTC (Microsoft Distributed Transaction Coordinator) manages distributed transactions; it implements a two-phase commit protocol. A distributed transaction is much slower than a local transaction. The transaction object automatically escalates a local transaction to a distributed transaction when it understands that a distributed transaction is needed; the developer does not have to do anything here.

9. Distributed Transaction System Architecture

We know that in a distributed transaction, several sites are involved. Each site has two components:

  1. Transaction Manager
  2. Transaction Coordinator

1. Transaction Manager: Maintains a log and uses that log if recovery is needed. It controls the whole transaction, initiating and completing it and managing its durability and atomicity. It also coordinates the transaction across one or more resources. There are two types of transaction managers:

  1. Local Transaction Manager: Coordinates transactions over a single resource only.
  2. Global Transaction Manager: Coordinates transactions over multiple resources.

2. Transaction Coordinator: Starts the execution of transactions that originate at its site, distributes subtransactions to the appropriate sites so that they can execute there, and coordinates each transaction at each site, so that the transaction is committed or rolled back at all sites.

10. Connection Transaction

A transaction that is tied directly to a database connection (SqlConnection) is called a connection transaction. SqlTransaction (IDbTransaction) is an example of a connection transaction. In .NET Framework 1.0/1.1 we used SqlTransaction.

string connString = ConfigurationManager.ConnectionStrings["db"].ConnectionString;
using (var conn = new SqlConnection(connString))
{
    conn.Open();
    using (SqlTransaction tran = conn.BeginTransaction())
    {
        try
        {
            // transactional code...
            using (SqlCommand cmd = conn.CreateCommand())
            {
                cmd.CommandText = "INSERT INTO Data(Code) VALUES('A-100');";
                cmd.Transaction = tran;
                cmd.ExecuteNonQuery();
            }
            tran.Commit();
        }
        catch (Exception) { tran.Rollback(); throw; }
    }
}

11. Ambient Transaction

An ambient transaction automatically encloses a code block that needs transaction support, without any transaction-related details being mentioned explicitly. An ambient transaction is not tied only to a database; any transaction-aware provider can be used. TransactionScope implements an ambient transaction. If you look at the use of TransactionScope, you will not find anything transaction-related passed to any method or set on any property: a code block is automatically attached to the transaction if it runs inside a TransactionScope. A WCF transaction is another example of a transaction-aware provider, and anyone can write a transaction-aware provider like the WCF implementation.
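As a minimal sketch (the class name is mine), the ambient transaction can be observed through the static Transaction.Current property, which is non-null only while a TransactionScope is active:

using System;
using System.Transactions;

class AmbientSketch
{
    static void Main()
    {
        Console.WriteLine(Transaction.Current == null);  // True: no ambient transaction yet

        using (var scope = new TransactionScope())
        {
            // Inside the scope, transaction-aware code can find the
            // ambient transaction without it being passed explicitly.
            Console.WriteLine(Transaction.Current != null);  // True
            scope.Complete();
        }
    }
}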

12. Transaction Properties

There are four important properties for a transaction. We call them ACID properties. They are:

    1. A-Atomic
    2. C-Consistent
    3. I-Isolation
    4. D-Durable
  1. Atomic: If all parts of the transaction individually succeed, the data will be committed and the database will be changed. If any single part of the transaction fails, all parts fail and the database remains unchanged. A part of a transaction might fail for various reasons: business rule violation, power failure, system crash, hardware failure, etc.
  2. Consistent: A transaction changes the database from one valid state to another valid state, following the various database rules, such as data integrity constraints (primary/unique keys, check/not-null constraints, referential integrity with valid references, cascading rules), etc.
  3. Isolation: One transaction is hidden from another; in other words, one transaction will not affect another transaction when both work concurrently.
  4. Durability: After a transaction successfully completes (is committed to the database), the changed data will not be lost in any situation: system failure, database crash, hardware failure, power failure, etc.

13. Transaction Isolation Level

Now I will start explaining something very important that is directly related to transactions: the transaction isolation level. Why is it so important? First of all, as I explained previously, isolation is an important transaction property: each transaction is isolated from the others and does not affect other concurrently executing transactions. How does a transaction management system achieve that important feature?

A Transaction Management System introduces a locking mechanism. With the help of this mechanism one transaction is isolated from another. The locking policy behaves differently based on the Isolation level set for each transaction. There are four very important isolation levels in .NET transaction scope. These are:

    1. Serializable
    2. Repeatable Read
    3. Read Committed
    4. Read Uncommitted

Before I start explaining the isolation levels, I need to explain the data reading phenomena inside a transaction. These reading phenomena are very important for understanding isolation levels properly.

  • Dirty Read: One transaction reads data changed by another transaction while that data is still uncommitted. You may take a decision/action based on that data. A problem arises if the data is rolled back later: your decision/action will then be wrong, and it produces a bug in your application.
  • Non-Repeatable Read: A transaction reads the same data from the same table multiple times. A problem arises when the data is different for each read.
  • Phantom Read: Suppose a transaction reads a table and finds 100 rows. A problem arises when the same transaction performs another read and finds 101 rows. The extra row is called a phantom row.

Now I will start explaining in short the important isolation levels:

  1. Serializable: The highest level of isolation. It locks data exclusively during reads and writes, and it acquires range locks so that phantom rows cannot be created.
  2. Repeatable Read: The second highest level of isolation. Same as Serializable, except it does not acquire range locks, so phantom rows may appear.
  3. Read Committed: It allows shared locks and reads only committed data; it never reads changed data that is in the middle of another transaction.
  4. Read Uncommitted: The lowest level of isolation. It allows dirty reads.
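As a minimal, self-contained sketch of how a level is requested (the class name, Main wrapper, and Console output are mine, not from the article), the level for a scope is supplied through a TransactionOptions object; no database is needed because nothing is enlisted:

```csharp
using System;
using System.Transactions;

class IsolationDemo
{
    static void Main()
    {
        // Request the lowest isolation level; dirty reads become possible
        // for any resource later enlisted in this scope.
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadUncommitted
        };

        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            // The ambient transaction now carries the requested level.
            Console.WriteLine(Transaction.Current.IsolationLevel); // ReadUncommitted
            scope.Complete();
        }
    }
}
```

Note that the level is fixed per scope at construction time; you cannot change it once the scope exists.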

Now I will explain TransactionScope and its usage patterns:

14. TransactionScope Default Properties

It is very important to know the default property values of the TransactionScope object, because we often create and use this object without configuring anything.

Three very important properties are:

  1. IsolationLevel
  2. Timeout
  3. TransactionScopeOption

We create and use TransactionScope as follows:

using (var scope = new TransactionScope())
{
    //transactional code…
    scope.Complete();
}

Here the TransactionScope object is created with the default constructor. We did not supply any values for IsolationLevel, Timeout, or TransactionScopeOption, so all three properties get their default values. So we need to know what those defaults are.

Property                Default Value   Available Options
IsolationLevel          Serializable    Serializable, RepeatableRead, ReadCommitted, ReadUncommitted
Timeout                 1 minute        Maximum 10 minutes
TransactionScopeOption  Required        Required, RequiresNew, Suppress
  1. IsolationLevel: Defines the locking mechanism and the policy for reading data that is inside another transaction.
  2. Timeout: How long the object will wait for the transaction to complete. Do not confuse it with the SqlCommand Timeout property, which defines how long a SqlCommand object waits for a single database operation (select/insert/update/delete) to complete.
  3. TransactionScopeOption: An enumeration with three values:
No  Option       Description
1   Required     The default value for TransactionScope. If an ambient transaction already exists, the scope joins it; otherwise a new transaction is created.
2   RequiresNew  A new transaction is always created. This transaction is independent of its outer transaction.
3   Suppress     No transaction is created; even if an ambient transaction already exists, the code inside the scope runs outside of it.
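The three options can be observed directly without a database, because no resource is enlisted. Here is a small self-contained sketch (class name, Main wrapper, and Console output are mine, not from the article) comparing the ambient transaction seen by each kind of inner scope:

```csharp
using System;
using System.Transactions;

class ScopeOptionDemo
{
    static void Main()
    {
        using (var outer = new TransactionScope())
        {
            string outerId = Transaction.Current.TransactionInformation.LocalIdentifier;

            // Required: joins the existing ambient transaction.
            using (var inner = new TransactionScope(TransactionScopeOption.Required))
            {
                Console.WriteLine(Transaction.Current.TransactionInformation.LocalIdentifier == outerId); // True
                inner.Complete();
            }

            // RequiresNew: always starts an independent transaction.
            using (var inner = new TransactionScope(TransactionScopeOption.RequiresNew))
            {
                Console.WriteLine(Transaction.Current.TransactionInformation.LocalIdentifier == outerId); // False
                inner.Complete();
            }

            // Suppress: code runs with no ambient transaction at all.
            using (var inner = new TransactionScope(TransactionScopeOption.Suppress))
            {
                Console.WriteLine(Transaction.Current == null); // True
                inner.Complete();
            }

            outer.Complete();
        }
    }
}
```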

How do we know the default values of these properties?

The System.Transactions assembly has two classes:

  1. Transaction
  2. TransactionManager

These classes expose the default values. If you run the following code inside a TransactionScope, you will see them:

using (var scope = new System.Transactions.TransactionScope())
{
    IsolationLevel isolationLevel = Transaction.Current.IsolationLevel;
    TimeSpan defaultTimeout = TransactionManager.DefaultTimeout;
    TimeSpan maximumTimeout = TransactionManager.MaximumTimeout;
}

Is it possible to override the default property values?

Yes, you can. Suppose you want the default timeout to be 30 seconds and the maximum timeout to be 20 minutes. If that is the requirement, you can configure it in your web.config (note that both values use TimeSpan format):

    <system.transactions>
        <defaultSettings timeout="00:00:30"/>
    </system.transactions>

For the machineSettings value, you need to update the machine.config on your server:

    <system.transactions>
        <machineSettings maxTimeout="00:20:00"/>
    </system.transactions>

The machineSettings section is declared in machine.config as:

    <section name="machineSettings" type="System.Transactions.Configuration.MachineSettingsSection, System.Transactions, Custom=null" allowDefinition="MachineOnly" allowExeDefinition="MachineToApplication"/>

15. Transaction Isolation Level Selection

You need proper knowledge of isolation levels before using them. The following table gives you a basic idea so you can select the appropriate isolation level for your transaction scope.

Isolation Level   Suggestion
Serializable      Locks data exclusively during read/write operations. For that reason it can often cause deadlocks, and as a result you may get timeout exceptions. Use this isolation level for highly sensitive transactional applications such as financial applications.
Repeatable Read   Same as Serializable except that it allows phantom rows. May be used in financial or heavily transactional applications, but only where phantom-row scenarios cannot occur.
Read Committed    Suitable for most applications. This is SQL Server's default isolation level.
Read Uncommitted  For applications that have no need to support concurrent transactions.

Now I will explain, with scenarios, how we can use TransactionScope:

16. Requirement-1

Create a transaction whose isolation level is ReadCommitted and whose timeout is 5 minutes.


var option = new TransactionOptions();
option.IsolationLevel = IsolationLevel.ReadCommitted;
option.Timeout = TimeSpan.FromMinutes(5);
using (var scope = new TransactionScope(TransactionScopeOption.Required, option))
{
    ExecuteSQL("CREATE TABLE MyNewTable(Id int);"); // ExecuteSQL is a helper in the sample code
    scope.Complete();
}

First, create a TransactionOptions object and set ReadCommitted and 5 minutes as its IsolationLevel and Timeout properties, respectively.

Second, create a transactional block by creating a TransactionScope object with its parameterized constructor, passing the TransactionOptions object you created earlier along with the TransactionScopeOption.Required value.

One important note: we are often unsure whether a DDL (Data Definition Language) statement can be used in a transaction. The answer is yes. You can use DDL statements such as CREATE/ALTER/DROP in a transaction; you can even use a TRUNCATE statement inside it.

17. Requirement-2

We need to create a transaction where one database operation runs against my local database and another against a remote database.


using (var scope = new TransactionScope())
{
    // one operation against the local database,
    // another against the remote database…
    scope.Complete();
}

There is no difference between the code for a local transaction and a remote/distributed one. As I said previously, TransactionScope implements an ambient transaction: it automatically enlists the code blocks that need transaction support, local or remote. However, you may hit an error when working with distributed transactions. The message will look like:

The partner transaction manager has disabled its support for remote/network transactions.

If you get that type of exception, you need to configure the MSDTC security settings on both your local and remote servers and make sure the MSDTC services are running.

To find the MSDTC configuration interface, go to:

Control Panel > Administrative Tools > Component Services > Distributed Transaction Coordinator > Local DTC

Some options on the Security tab are described below:

Property Name   Description
Network DTC Access   If not selected, MSDTC will not allow any remote transactions.
Allow Remote Clients   If checked, MSDTC will coordinate transactions for remote clients.
Allow Remote Administration   Allows remote computers to access and configure these settings.
Allow Inbound   Allows remote computers to flow transactions to the local computer. Needed where MSDTC is hosted for a resource manager such as SQL Server.
Allow Outbound   Allows the local computer to flow transactions to remote computers. Needed on a client computer where transactions are initiated.
Mutual Authentication   Local and remote computers communicate with encrypted messages; they establish a secured connection using a Windows Domain Account for message communication.
Incoming Caller Authentication Required   If mutual authentication cannot be established but the incoming caller is authenticated, communication is allowed. Supported only on Windows 2003/XP Service Pack 2.
No Authentication Required   Allows any non-authenticated, non-encrypted communication.
Enable XA Transactions   Allows different operating systems to communicate with MSDTC using the XA standard.
DTC Logon Account   The account the DTC service runs under. The default account is Network Service.

18. Distributed Transaction Performance

Distributed transactions are slower than local transactions. A two phase commit protocol is used to manage distributed transactions; it is simply an algorithm by which a distributed transaction is performed. Three commit protocols are mostly used:

  1. Auto Commit: The transaction is committed automatically if all SQL statements execute successfully, or rolled back if any of them fails.
  2. Two Phase Commit: The transaction waits before the final commit for messages from all other parties involved in the transaction, and it locks resources before commit or rollback. For this reason it is called a blocking protocol, and this is also why it is much slower. It is the most widely used protocol for managing distributed transactions.
  3. Three Phase Commit: The transaction is finally committed only if all nodes agree. It is a non-blocking protocol and is faster than two phase commit. This protocol is more complicated and more expensive, but it avoids some drawbacks of the two phase commit protocol.
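To make the two phase commit idea concrete, here is a small self-contained simulation (all class and method names are mine; this is the voting logic only, not the real MSDTC wire protocol): the coordinator first asks every participant to prepare, and only if all vote yes does it tell them to commit; otherwise all roll back.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A toy participant: votes in phase 1, then commits or rolls back in phase 2.
class Participant
{
    private readonly bool _canCommit;
    public string State { get; private set; } = "active";
    public Participant(bool canCommit) { _canCommit = canCommit; }
    public bool Prepare() => _canCommit;              // phase 1: vote
    public void Commit() { State = "committed"; }     // phase 2
    public void Rollback() { State = "rolled back"; } // phase 2
}

static class Coordinator
{
    // Commit only if every participant voted yes in phase 1.
    public static void Run(List<Participant> participants)
    {
        bool allPrepared = participants.All(p => p.Prepare());
        foreach (var p in participants)
        {
            if (allPrepared) p.Commit(); else p.Rollback();
        }
    }
}

class TwoPhaseCommitDemo
{
    static void Main()
    {
        var ok = new List<Participant> { new Participant(true), new Participant(true) };
        Coordinator.Run(ok);
        Console.WriteLine(ok[0].State); // committed

        var failing = new List<Participant> { new Participant(true), new Participant(false) };
        Coordinator.Run(failing);
        Console.WriteLine(failing[0].State); // rolled back
    }
}
```

The "blocking" cost of the real protocol comes from the gap between the two phases: every participant holds its locks from the moment it votes until the coordinator's final decision arrives.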

19. Requirement-3

I want to create a transaction inside another transaction.


string connectionString = ConfigurationManager.ConnectionStrings["db"].ConnectionString;
var option = new TransactionOptions
{
    IsolationLevel = IsolationLevel.ReadCommitted,
    Timeout = TimeSpan.FromSeconds(60)
};
using (var scopeOuter = new TransactionScope(TransactionScopeOption.Required, option))
{
    using (var conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = conn.CreateCommand())
    {
        cmd.CommandText = "INSERT INTO Data(Code, FirstName) VALUES('A-100','Mr.A')";
        conn.Open();
        cmd.ExecuteNonQuery();
    }
    using (var scopeInner = new TransactionScope(TransactionScopeOption.Required, option))
    {
        using (var conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = conn.CreateCommand())
        {
            cmd.CommandText = "INSERT INTO Data(Code, FirstName) VALUES('B-100','Mr.B')";
            conn.Open();
            cmd.ExecuteNonQuery();
        }
        scopeInner.Complete();
    }
    scopeOuter.Complete();
}

There is no problem in creating a transaction inside another (nested) transaction, but you should define the behaviour of the inner transaction. This behaviour depends on the value of TransactionScopeOption. If you select Required, the inner scope joins its outer transaction: if the outer transaction commits, the inner one commits; if the outer transaction rolls back, the inner one rolls back. If you select RequiresNew, a new transaction is created, and it commits or rolls back independently of the outer one. You must be clear about these concepts before working with nested transactions using TransactionScope.

20. Requirement-4

I want to call rollback explicitly from a transaction.


using (var scope = new TransactionScope())
{
    // either one of the following lines can be used;
    // if you simply omit scope.Complete(), the transaction
    // will automatically be rolled back
    Transaction.Current.Rollback();
    //scope.Dispose();
}

If you do not call the TransactionScope.Complete() method then the transaction will automatically be rolled back. If you need to explicitly call rollback for some scenarios, then you have two options:

  1. Calling Transaction.Current.Rollback() rolls back the current transaction.
  2. Calling TransactionScope.Dispose() also rolls back the current transaction.

Just remember one thing: if you explicitly call Transaction.Current.Rollback() or TransactionScope.Dispose(), you should not call the TransactionScope.Complete() method afterwards. If you do, you will get an ObjectDisposedException:

“Cannot access a disposed object. Object name: ‘TransactionScope’.”
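The rollback-on-dispose behaviour can be observed without any database by watching the transaction's completion status (a minimal sketch; the class name and Console output are mine, not from the article):

```csharp
using System;
using System.Transactions;

class RollbackDemo
{
    static void Main()
    {
        TransactionStatus finalStatus = TransactionStatus.Active;
        using (var scope = new TransactionScope())
        {
            // record the transaction's final status when it completes
            Transaction.Current.TransactionCompleted +=
                (sender, e) => finalStatus = e.Transaction.TransactionInformation.Status;
            // no scope.Complete() here, so the transaction aborts on dispose
        }
        Console.WriteLine(finalStatus);
    }
}
```

Add a scope.Complete() call before the end of the using block and the printed status changes from Aborted to Committed.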

21. Requirement-5

I want to create a file/folder dynamically inside a transaction scope. If my transaction is rolled back, I want the created file/folder to be removed automatically, just like a database row.


string newDirectory = @"D:\TestDirectory";
string connectionString = ConfigurationManager.ConnectionStrings["db"].ConnectionString;
using (var scope = new TransactionScope())
{
    using (var conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = conn.CreateCommand())
    {
        cmd.CommandText = "INSERT INTO Data(Code) VALUES('A001');";
        conn.Open();
        cmd.ExecuteNonQuery();
    }
    Directory.CreateDirectory(newDirectory); // not transaction aware!
    scope.Complete();
}

TransactionScope is not limited to databases; it can support other resources such as the file system, MSMQ, etc. But you need extra work to make them participate. First of all, what I show in the code block above will not work. Why? Because the directory creation is not enlisted in the transaction by default. So what do we need to do?

public interface IEnlistmentNotification
{
    void Commit(Enlistment enlistment);
    void InDoubt(Enlistment enlistment);
    void Prepare(PreparingEnlistment preparingEnlistment);
    void Rollback(Enlistment enlistment);
}

The System.Transactions namespace has an interface named IEnlistmentNotification. If I want my component/service to be transaction aware, I need to implement that interface. The following code shows a very simple and straightforward implementation:

public class DirectoryCreator : IEnlistmentNotification
{
    private readonly string _directoryName;

    public DirectoryCreator(string directoryName)
    {
        _directoryName = directoryName;
        // create the directory eagerly and enlist in the ambient transaction
        Directory.CreateDirectory(_directoryName);
        Transaction.Current.EnlistVolatile(this, EnlistmentOptions.None);
    }

    public void Commit(Enlistment enlistment) => enlistment.Done();

    public void InDoubt(Enlistment enlistment) => enlistment.Done();

    public void Prepare(PreparingEnlistment preparingEnlistment) => preparingEnlistment.Prepared();

    public void Rollback(Enlistment enlistment)
    {
        // undo: remove the directory created earlier
        if (Directory.Exists(_directoryName))
            Directory.Delete(_directoryName, true);
        enlistment.Done();
    }
}

The above class creates a directory (folder), and the component is transaction aware. We can use this class with any TransactionScope: if the scope is committed, the directory survives; otherwise it is deleted (if it was already created). I show only directory creation here; if you want, you can create a similar class/component for file creation. Now, how do we use this class in a transaction scope?

string newDirectory = @"D:\TestDirectory";
string connectionString = ConfigurationManager.ConnectionStrings["db"].ConnectionString;
using (var scope = new TransactionScope())
{
    using (var conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = conn.CreateCommand())
    {
        cmd.CommandText = "INSERT INTO Data(Code) VALUES('A001');";
        conn.Open();
        cmd.ExecuteNonQuery();
    }
    var creator = new DirectoryCreator(newDirectory);
    scope.Complete();
}

Now, it will work!

Transactional NTFS (TxF) .NET is an open source project. You can use this library to create/write/copy files and directories inside a TransactionScope, and it supports transactions automatically.

  • First, download the component.
  • Add the component as a reference to your project.
  • Use the component API in your transactional block.

TxF API usage code sample:

using (var ts = new System.Transactions.TransactionScope())
{
    using (var conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = conn.CreateCommand())
    {
        cmd.CommandText = "INSERT INTO Data(Code) VALUES('A001');";
        conn.Open();
        cmd.ExecuteNonQuery();
    }
    TxF.Directory.CreateDirectory("d:\\xyz", true);
    TxF.File.CreateFile("D:\\abc.txt", File.CreationDisposition.OpensFileOrCreate);
    ts.Complete();
}

TxF component supports:

  • Create/Delete Directory
  • Create/Delete File
  • Read/Write File
  • Copy File

22. Points of Interest

Transaction management is actually a huge subject, and a very complex one too, especially distributed transactions. I have tried my best to present it as simply as possible. If you want a complete picture of transactions, you should study further; I suggest reading research papers on transactions, especially distributed transactions.

You can also explore transaction aware services/components further. I showed a very simple way to implement them here, but in real life you may face more difficult scenarios, so you need to be prepared for that. In the future, Microsoft may add transaction aware components such as dictionary/filesystem/directory services; if that happens, developers' lives will be much easier.

Sample Source Code

I have attached a sample source code with this article, written with Visual Studio 2012 and .NET Framework 4.5. I added a unit test project so that you can debug/test the code and understand it properly.


Posted in ASP.NET MVC, C#, Database, Uncategorized | Leave a comment

Moq – Mock Database


Moq is a very useful framework for mocking service calls and methods in your unit tests.

This article helps you understand Moq with respect to mocking a database (i.e., writing unit test cases for your repository project).

Here I have used Microsoft Enterprise Library objects (to make it easy to understand), but you can extend the approach to any other framework, utility, or ADO.NET methods. I will also cover some advanced Moq concepts such as anonymous methods, Callback(), and queueing.


I have been using Moq for almost a year now and have found that many people struggle with mocking databases. Many of us use a dev instance of the database and let our test cases call an actual SQL instance.

Using the code

First things first: your repository should have a constructor (or a public property) through which you can pass the mocked database object from the unit test.

Below is a sample of such a constructor:

public MyRepository(Database db)
{
    this.database = db;
}

Below is a sample of an “ExecuteScalar” method (it returns the number of employees in a certain location):

using (DbCommand cmd = database.GetStoredProcCommand(SPCREATEPRODUCTLIST))
{
    this.database.AddParameter(cmd, "@Location", DbType.String, 0,
        ParameterDirection.Input, true, 0, 0, "Location", DataRowVersion.Default, location);
    object result = database.ExecuteScalar(cmd);
    return Convert.ToInt32(result);
}

This is how you can mock a scalar method:

private static Mock<Database> MockExecuteScalar(object returnValue)
{
    Mock<DbProviderFactory> mockedDBFactory = new Mock<DbProviderFactory>();
    Mock<Database> mockedDB = new Mock<Database>("MockedDB", mockedDBFactory.Object);
    mockedDB.Setup(x => x.ExecuteScalar(It.IsAny<DbCommand>())).Returns(returnValue);
    return mockedDB;
}

(You can read more about the Enterprise Library and its implementations in the official documentation.)

This is quite straightforward: the method above mocks “ExecuteScalar”. Since that method is marked virtual in the Database class, you are able to mock it. (You can easily mock interfaces; when mocking a class, you can only mock virtual properties and methods.)

Below is how you will call this in your unit test case:

Database mockedDB = MockExecuteScalar("5").Object;
MyRepository target = new MyRepository(mockedDB);
var result = target.GetEmployeeCount("London");

In the same way you can mock “ExecuteNonQuery” implementations:

private static Mock<Database> MockExecuteNonQuery(int returnValue)
{
    Mock<DbProviderFactory> mockedDBFactory = new Mock<DbProviderFactory>();
    Mock<Database> mockedDB = new Mock<Database>("MockedDB", mockedDBFactory.Object);
    mockedDB.Setup(x => x.ExecuteNonQuery(It.IsAny<DbCommand>())).Returns(returnValue);
    return mockedDB;
}

Now, let’s move on to “ExecuteReader” implementations. ExecuteReader returns a collection of rows, and we loop through the DataReader stream until the end of the data. So here there are two functions to mock:

  1. ExecuteReader() – to get the actual data
  2. Read() – to return true until we have consumed the desired data

Below is an example of a typical implementation using “ExecuteReader”:

using (DbCommand cmd = database.GetStoredProcCommand("GetEmployeeDetails", parameters))
using (IDataReader dr = database.ExecuteReader(cmd))
{
    while (dr.Read())
    {
        listofEmployeeDetails.Add(new Employee
        {
            EmployeeId = dr["EmpID"].ToString(),
            EmployeeName = dr["EmployeeName"].ToString(),
            Location = dr["Location"].ToString()
        });
    }
}

First, let’s see a simple example where we mock “ExecuteReader” to return a single row of data from our mocked database:

Step 1: Mock “Read” method

Before mocking the Read method, I would like to brief you on anonymous methods in Moq functions and the Callback() method.


We have already seen the .Returns() method, which returns the response for a mocked function call. If you want to execute custom logic after the control comes back from Returns(), you can use Callback().

This will look something like below:

mockedObject.Setup(x => x.myMethod(It.IsAny<string>())).Returns("Hello").Callback(() => { /* custom logic goes here */ });

Anonymous Methods

Anonymous methods come in handy when you are calling a mocked method multiple times and want to change the return value dynamically.

Below is an example:

string returnValue = "Hello";
mockedObject.Setup(x => x.myMethod(It.IsAny<string>()))
            .Returns(() => returnValue)
            .Callback(() => returnValue = "World");

When we call “myMethod” for the very first time, the return value will be “Hello”; from the second time onward it will return “World”. You can put any conditions or custom implementation inside this anonymous method to suit your needs.

Now, in this scenario we want the “ExecuteReader” method to read one row of data, so the dataReader.Read() method should return true the first time only.

So we can mock the .Read() method like this:

var mockedDataReader = new Mock<IDataReader>();
bool readFlag = true;
mockedDataReader.Setup(x => x.Read()).Returns(() => readFlag).Callback(() => readFlag = false);
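Under the hood this is just closure state: Returns evaluates its lambda on every call, and Callback runs afterwards. The same mechanics in plain C# (no Moq; the names and Console output are mine) make the true-then-false sequence obvious:

```csharp
using System;

class ReadFlagDemo
{
    static void Main()
    {
        bool readFlag = true;
        // the "Returns" part: produce the current flag value
        Func<bool> returns = () => readFlag;
        // the "Callback" part: flip the flag after the value is produced
        Action callback = () => readFlag = false;

        // one mocked Read() call = evaluate Returns, then run Callback
        Func<bool> read = () => { bool value = returns(); callback(); return value; };

        Console.WriteLine(read()); // True  (first row)
        Console.WriteLine(read()); // False (no more rows)
    }
}
```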

Step 2: Mock ExecuteReader

Before we mock the “ExecuteReader” method, we need to set up the response data, so that when I call dr["EmpID"] I get my desired mocked value. We can achieve this as below:

mockedDataReader.Setup(x => x["EmpID"]).Returns("43527");  
mockedDataReader.Setup(x => x["EmployeeName"]).Returns("Smith");  
mockedDataReader.Setup(x => x["Location"]).Returns("London");

Now we will mock the “ExecuteReader” method, which will return our mocked object.

Mock<DbProviderFactory> mockedDBFactory = new Mock<DbProviderFactory>();
Mock<Database> mockedDB = new Mock<Database>("MockedDB", mockedDBFactory.Object);
mockedDB.Setup(x => x.ExecuteReader(It.IsAny<DbCommand>())).Returns(mockedDataReader.Object);

The above approach is the same as for “ExecuteScalar” and “ExecuteNonQuery”, but here we return our custom DataReader object. Below is how the complete method looks:

private static Mock<Database> MockExecuteReader(Dictionary<string, object> returnValues)
{
    var mockedDataReader = new Mock<IDataReader>();
    bool readFlag = true;
    mockedDataReader.Setup(x => x.Read()).Returns(() => readFlag).Callback(() => readFlag = false);
    foreach (KeyValuePair<string, object> keyVal in returnValues)
    {
        mockedDataReader.Setup(x => x[keyVal.Key]).Returns(keyVal.Value);
    }
    Mock<DbProviderFactory> mockedDBFactory = new Mock<DbProviderFactory>();
    Mock<Database> mockedDB = new Mock<Database>("MockedDB", mockedDBFactory.Object);
    mockedDB.Setup(x => x.ExecuteReader(It.IsAny<DbCommand>())).Returns(mockedDataReader.Object);
    return mockedDB;
}

There might be cases where you want to select multiple rows from the database.

Before I start explaining how to mock multiple rows, let me explain one tricky thing about the Returns() function.

Let's say I mocked a method which I am calling multiple times in my code:

mockedObject.Setup(x => x.myMethod()).Returns("First");
mockedObject.Setup(x => x.myMethod()).Returns("Second");
mockedObject.Setup(x => x.myMethod()).Returns("Third");

The code above might look OK at first glance, but it will give "Third" as output every time, because the last setup wins.

Here anonymous functions come in really handy, but we need to ensure that we get the output in a certain order. We can achieve this by using a Queue. The code will look something like this:

Queue<object> responseQueue = new Queue<object>();
responseQueue.Enqueue("First");
responseQueue.Enqueue("Second");
responseQueue.Enqueue("Third");
mockedObject.Setup(x => x.myMethod()).Returns(() => responseQueue.Dequeue());

If you observe, the Returns() method now invokes an anonymous method which dequeues the values one by one.

For returning multiple rows we need something similar, where we Dequeue() each row's values one by one. The completed method looks like this:

private static Mock<Database> MockExecuteReader(List<Dictionary<string, object>> returnValues)
{
    var mockedDataReader = new Mock<IDataReader>();
    int count = 0;
    // Read() returns true until all mocked rows have been consumed
    mockedDataReader.Setup(x => x.Read()).Returns(() => count < returnValues.Count).Callback(() => count++);
    // one queue per column, so each indexer access dequeues that column's next value
    var queues = new Dictionary<string, Queue<object>>();
    foreach (var row in returnValues)
    {
        foreach (KeyValuePair<string, object> keyVal in row)
        {
            if (!queues.ContainsKey(keyVal.Key))
            {
                var queue = new Queue<object>();
                queues[keyVal.Key] = queue;
                mockedDataReader.Setup(x => x[keyVal.Key]).Returns(() => queue.Dequeue());
            }
            queues[keyVal.Key].Enqueue(keyVal.Value);
        }
    }
    Mock<DbProviderFactory> mockedDBFactory = new Mock<DbProviderFactory>();
    Mock<Database> mockedDB = new Mock<Database>("MockedDB", mockedDBFactory.Object);
    mockedDB.Setup(x => x.ExecuteReader(It.IsAny<DbCommand>())).Returns(mockedDataReader.Object);
    return mockedDB;
}

If you observe the mocking of Read(), it is based on the number of mocked data rows you want to return (the length of the List<>).

On each Callback() the local variable count is incremented, so that when it exceeds the number of data rows, the Read() method returns false.

You can apply these techniques (anonymous methods, the Callback method, and queueing) in all your unit tests. While mocking repositories, you can use these generic methods to mock your databases.
