Top 7 nice-looking free open-source Angular projects


  • CoreUI Angular2 (Stars 499). A nice-looking template with numerous useful icons, available for many stacks: HTML5 AJAX, HTML5 Static, AngularJS, Angular 2+, React.js and Vue.js.
  • SB-Admin-BS4-Angular-6 (Stars 1,454). This simple and very neat admin platform is based on Angular 6 and Bootstrap 4, and looks really usable. Like ngx-admin, it also has an RTL/LTR feature, so a wider audience can apply it to their projects.
  • Ng-dashboard (Stars 10). It looks very simple and structured, but does not support as many libraries as the others. The dashboard is not meant to be too dynamic, but it can surely contribute enough to your next project.
  • CDK-admin (Stars 82). A very powerful open-source admin dashboard generated with Angular CLI version 1.5.0. It is built on Angular 5 and provides a range of responsive, reusable, and commonly used components.
  • PaperAdmin (Stars 35). We liked its simplicity. The dashboard has nice soft color combinations applied to tables, graphics and icons.
  • Material Dashboard Angular2 (Stars 339). Fully responsive, stylish and free. Like many users all over the world, we love this one. It comes with 5 color filter choices for both the sidebar and the card headers.
  • To finish the list, we decided to remind you of ngx-admin (Stars 11,659), in case you missed some of its fresh features and bug fixes. It is still a free admin dashboard template based on Angular 6+ and Bootstrap 4, with numerous unique features and icons.

We hope you found some nice projects among these links and will use some of them in your next project. Good luck!

Posted in Education and Training, Integration, Knowledge, Problem solving, Programming, Technology, Uncategorized

Creating your first dashboard

2.1. Creating a data provider

The first thing to do, after you have accessed the web application, is to create a data provider. On the left menu, go to ‘Administration > Data providers‘ and once there, select the option ‘Create new data provider‘.

The purpose of data providers is to gather information from any system, be it a database, a file or any other source, and transform it into the internal in-memory representation for building dashboards. As you may guess, there exist different data sources and therefore different ways to retrieve data, as we’re going to see next.

Data providers table

Figure 2.1. Data providers table

2.1.1. Retrieving data from a CSV file

Click on ‘Create new data provider‘. The following fields will be shown in the form, with some sensible defaults:

Once you have filled in all the fields, click on ‘Try’ to check that everything works properly. The application will show the message ‘Correct data set ...’, and you can continue by pressing ‘Save’.

CSV data provider creation form

Figure 2.2. CSV data provider creation form

Next, a screen is shown with all the fields found after parsing the file, giving us the option to change the name of each field. For numeric fields it gives us the option to specify whether we want numeric values to be treated as labels by the dashboard engine. This is really useful when dealing with numbers which actually behave as labels, e.g. the numeric reference code of a product item.

Data provider properties configuration panel

Figure 2.3. Data provider properties configuration panel

After this last step, you can save and finish the creation of your new data provider.

New data provider instance has been created

Figure 2.4. New data provider instance has been created

2.1.2. Reading data from an SQL query

You can create a data provider to query a relational database. Go to ‘Administration > Data providers’ and click on ‘Create new data provider’. Choose the ‘SQL Query’ option and fill in the form with the data provider name and the SQL query that will retrieve the data.

New SQL data provider form

Figure 2.5. New SQL data provider form

In this form you have the ability to select the data source where the data comes from. By default the local data source is selected but you can define new connections to external data sources. To do this you should go to the ‘Administration > External connections‘ section and from there you can create a new data source connection.

Data sources management panel

Figure 2.6. Data sources management panel

New data source creation form

Figure 2.7. New data source creation form

Let’s get back to the creation of our SQL data provider. Once the data source has been selected and the query is typed in, you can click on the ‘Try’ button, and if the query is successful you will get the following message.

SQL query input field

Figure 2.8. SQL query input field

After that, you can rename the properties to give them more user-friendly names.

SQL provider columns configuration panel

Figure 2.9. SQL provider columns configuration panel

Finally, just click the ‘Save‘ button to confirm the creation of the data provider:

2.1.3. Dealing with high volume databases

The previous sections showed how data can be loaded from plain text files like CSV or queried from a database connection. When the data is small enough, Dashbuilder can handle the data set in memory pretty well, as long as it doesn’t exceed the 2MB size limit. However, most of the time our data sets are bigger and we can’t upload all the data for Dashbuilder to handle on its own. It is in these cases that database-backed queries can help us implement nice drill-down reports and charts without preloading all the data.

Imagine a database containing two tables:

Stock trade tables

Figure 2.10. Stock trade tables

Now, let’s take as an example a very simple stock exchange dashboard which is fed from the two tables above. The dashboard displays some indicators about several companies from several countries selling their shares at a given price on every day’s closing date. The dashboard displays 4 KPIs, as you can see in the following screenshot:

Stock trade dashboard

Figure 2.11. Stock trade dashboard

All the indicators are displaying data coming from the two database tables defined above.

  • Bar chart – Average price per company
  • Area chart – Sales price evolution
  • Pie chart – Companies per country
  • Table report – Stock prices at closing date

What we’re going to discuss next is the two strategies we can use for building a dashboard. This is an important aspect to consider, especially if we’re facing big data scenarios.

The in-memory strategy

This strategy consists in creating a data provider which loads all the data set rows by executing a single SQL query over the two tables.


Every single indicator on the dashboard shares the same data set. When filters are executed from the UI, no further SQLs are executed, since all the calculations are done over the data set in memory.


Pros:

  • Data integration logic stays very simple
  • Only a single data provider is needed
  • Faster configuration of KPIs, since all the data set properties are available at design time
  • Multiple indicators from a single data provider


Cons:

  • Can’t be applied to medium/large data sets due to poor performance

The native strategy

The native approach consists in having a data provider for every indicator in the dashboard instead of loading and handling all the data set in memory. Every KPI is told what data it has to display. Every time the user filters on the dashboard, the SQLs are parsed, injected with the filter values and re-executed. No data is held in memory; the dashboard always asks the database for the data.

The SQL data providers are the following:

  • Bar chart – Average price per company
              WHERE {sql_condition, optional,, country}
              AND {sql_condition, optional,, name}
              GROUP BY C.NAME
  • Area chart – Sales price evolution
              WHERE {sql_condition, optional,, country}
              AND {sql_condition, optional,, name}
  • Pie chart – Companies per country
              FROM COMPANY
              WHERE {sql_condition, optional, country, country}
              AND {sql_condition, optional, name, name}
              GROUP BY COUNTRY
  • Table report
              WHERE {sql_condition, optional,, country}
              AND {sql_condition, optional,, name}

As you can see, every KPI delegates the filter and group-by operations to the database. The filter magic happens thanks to the {sql_condition} statements. Every time a filter occurs in the UI, the dashbuilder core gets all the SQL data providers referenced by the KPIs and injects the user’s current filter selections into those SQLs. The signature of the sql_condition clause is the following:

      {sql_condition, [optional | required], [db column], [filter property]}


  • optional: if no filter exists for the given property then the condition is ignored.
  • required: if no filter is present then the SQL returns no data.
  • db column: the database column the current filter is applied to.
  • filter property: the UI property whose selected values are taken.
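To make the substitution concrete, here is a hypothetical TypeScript sketch, not Dashbuilder’s actual implementation (the function name and behavior details are invented), of how a {sql_condition} token could be expanded from the user’s current filter selections:

```typescript
// Hypothetical sketch: expand {sql_condition, optional|required, column, property}
// tokens using the user's current filter selections (property -> selected values).
function expandSqlConditions(sql: string, filters: Record<string, string[]>): string {
  return sql.replace(
    /\{sql_condition,\s*(optional|required),\s*([^,]*),\s*([^}]*)\}/g,
    (_m: string, mode: string, column: string, property: string) => {
      const values = filters[property.trim()];
      if (!values || values.length === 0) {
        // optional: the condition vanishes; required: the query must return no data
        return mode === "optional" ? "1=1" : "1=0";
      }
      const list = values.map(v => `'${v}'`).join(", ");
      return `${column.trim()} IN (${list})`;
    });
}
```

For example, with ‘Spain’ selected on the country property, WHERE {sql_condition, optional, country, country} becomes WHERE country IN ('Spain'); with no selection, the optional condition degrades to the always-true 1=1.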


Pros:

  • Support for high volumes of data. The database tables need to be properly indexed, though.


Cons:

  • The set-up of the data providers is a little trickier, as it requires creating SQL queries with the required filter, group-by and sort operations for every KPI.

When designing a dashboard, never forget to think thoroughly about the origin, type and volume of the data you want to display in order to choose the right strategy.

2.2. Creating a KPI

Once the necessary data providers have been created, you can continue by adding a new Key Performance Indicator to an existing page. All dashboards are created by adding indicators and other types of panels to pages. A dashboard is a page with a mixture of different kinds of panels placed on it. The following screenshot shows an example of a Sales Dashboard:

Sales dashboard example

Figure 2.12. Sales dashboard example

Pages can be created from scratch, or duplicating an existing one. Both options are explained in the following sections. Meanwhile we will assume the page already exists and we only want to add an indicator.

Page layout with an empty region in the center

Figure 2.13. Page layout with an empty region in the center

Indicators are a special type of panel; panels are the widgets that can be placed within the page. To add a panel or indicator, just click on the toolbar icon ‘Add panel to the current page’:

This will bring up a popup showing all the available panels:

Panel instance selector

Figure 2.14. Panel instance selector

To add a new ‘Key Performance Indicator’, click on Dashboard > Key Performance Indicator. Drag the ‘Create panel’ option and drop it into any of the page regions. You will see them being highlighted as you move the mouse over them; then simply drop the panel.

Drag and drop of a panel instance into an empty region

Figure 2.15. Drag and drop of a panel instance into an empty region

Once dropped, the first step is to select the Data Provider you need to use, as configured before, to feed the charts and reports with data. Select any of the data providers and then you can start creating a new indicator.

KPI creation - Data provider selector

Figure 2.16. KPI creation – Data provider selector

Now you should see the chart editing panel, an intuitive environment which helps you configure different types of charts and reports…

KPI configuration panel

Figure 2.17. KPI configuration panel

  • Domain (X Axis): The data column that is taken as the X axis. In this example, we choose the property ‘Country’.
  • Range (Y Axis): Information to be grouped and aggregated for every domain interval. For example: ‘Amount’.
  • Renderer: The rendering library to use. Each one provides different features and visualization styles. By default, ‘NVD3’.
  • Sort intervals by: How the domain values are sorted, for example, according to their range value.
  • Sort order: It can be ascending or descending.
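The Domain/Range pairing above is essentially a group-and-aggregate operation. As an illustrative sketch (not the Dashbuilder engine; the function and field names are invented), here is how grouping the ‘Amount’ range by the ‘Country’ domain could look in TypeScript:

```typescript
// Illustrative sketch: group a range column ("amount") by a domain column
// ("country"), summing the values per domain interval.
function groupSum(rows: { country: string; amount: number }[]): Map<string, number> {
  const out = new Map<string, number>();
  for (const r of rows) {
    // accumulate the running total for this domain value
    out.set(r.country, (out.get(r.country) ?? 0) + r.amount);
  }
  return out;
}
```

A real chart editor would also offer other aggregation functions (average, count, min, max) over the same grouping.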

To finish editing the panel, just close the panel editing window. If you want to get back to it again, click on the upper right corner of the panel area and select the ‘Edit content’ menu option.

Panel administration menu - Edit content option

Figure 2.18. Panel administration menu – Edit content option

The system provides you with 3 types of chart displayers: bar, pie and line, plus a special table displayer, very useful for creating tabular reports. The system also comes with 2 rendering engines: NVD3 (pure HTML5/SVG technology) and a Flash-based one, Open Flash Chart 2. Each renderer has its own available features, so depending on the type of chart and renderer chosen, some display features may be enabled or disabled. For instance, the ‘Paint area’ feature is not available for OFC2 line charts.

2.3. Composing a dashboard

A dashboard is basically a page with some KPIs placed on it (plus some other additional widgets as we will see later on). There are two different ways of creating a page:

Starting as a blank page:

Duplicating an existing page:

You will find these icons at the top of the page, in the administration bar:

To create a new page, click on the ‘Create new page‘ icon:

A form will be shown to fill in some parameters:

Page creation form

Figure 2.19. Page creation form

  • Page title.
  • Parent page: Pages are organized in a hierarchical way. This is the parent page.
  • Skin: This will select a specific look’n’feel and CSS stylesheet for this page. You can leave the default value.
  • Envelope: Defines which kind of HTML template will be placed around the page layout.
  • Region layout: This is the template, that is, how regions are organized to place the panels inside the page. We can choose any of the installed types, for example, “Demo – Default template with sliding”.

New page item into the page tree administration screen

Figure 2.20. New page item into the page tree administration screen

Most of these properties will be discussed in the chapter about ‘Customizing look’n’feel’. After creating the page, you might realize the page is still not accessible from the left menu, but you can see it in the combo list in the administration toolbar:

Brand new empty page

Figure 2.21. Brand new empty page

If you want this page to be shown in the left menu, you can click on ‘Edit content’ and then add the newly created page to the list of options displayed in the menu.

Repeat until the page has all the content and panels required. After dropping the panels into the right regions and configuring them, you will be able to create dashboards that look like the following one:

Panel composition for a typical dashboard

Figure 2.22. Panel composition for a typical dashboard

As you can see, a dashboard is usually composed of one or more instances of the following panel types:

  • Dashboard > Key Performance Indicator
  • Dashboard > Filter & Drill-down
  • Navigation > Tree menu
  • Navigation > Language menu
  • Navigation > Logout panel

2.3.1. Duplicating a page

As mentioned earlier, another way to create new pages is to copy an existing one via the ‘Duplicate current page’ icon, which is a much faster way to create pages. After clicking on the clone icon located in the toolbar, a page similar to the image below will be shown. From there we can select the instances we want to duplicate and those we want to keep as is (to reuse).

Wizard for page cloning

Figure 2.23. Wizard for page cloning

Once finished, press the ‘Duplicate page’ button and a brand new page will be created with the same name as the original one but starting with the prefix ‘Copy of’. Notice that if a panel instance is reused, then any changes made to it will be reflected on all the pages where that instance is being used. This is a cool feature when defining, for instance, our navigation menus, since we can define a single ‘Tree menu’ panel and then configure all the pages to display the same menu instance.

2.3.2. Configuring filter and drill-down

The ‘Filter & Drill-down’ panel allows the quick definition of dynamic forms that let us navigate through the data displayed by the dashboard. Once an instance of the ‘Filter & Drill-down’ panel is dropped on the page, we just have to select the ‘Edit content’ option from the panel menu. After that, a popup window similar to the following will be displayed:

Filter panel configuration

Figure 2.24. Filter panel configuration

This is a filter configuration panel where we can set the filter behaviour. Let’s focus first on the middle-bottom part of the screen: the data provider’s property table, which lists the properties of ALL the data providers referred to by the KPIs on the page. For example, if we are building a sales dashboard and all its KPIs are built on top of the same data provider called ‘Sales dashboard demo’, then the system lists all the data properties of that provider. Only the properties we select as ‘Visible’ will be part of the filter form. Additionally, we can enable the drill-down feature for each property; if enabled, the system will redirect to the target page when the property is selected on the filter form. Below is a screenshot of the filter panel of the ‘Sales dashboard demo’.

Filter panel of the sales dashboard example

Figure 2.25. Filter panel of the sales dashboard example

Posted in Business Metrics, Integration, Knowledge, Problem solving, Programming, Uncategorized

What is the best way to declare a global variable in Angular

A shared service is the best approach

export class SharedService {
  globalVar: string;
}

But you need to be very careful when registering it to be able to share a single instance for your whole application. You need to define it when registering your application:

bootstrap(AppComponent, [SharedService]);

but not to define it again within the providers attributes of your components:

  providers: [ SharedService ], // No

Otherwise a new instance of your service will be created for the component and its sub components.

You can have a look at this question regarding how dependency injection and hierarchical injectors work in Angular2:

Note that you can also define Observable properties in the service to notify parts of your application when your global properties change:

export class SharedService {
  globalVar: string;
  globalVarUpdate: Observable<string>;
  private globalVarObserver: Observer<string>;

  constructor() {
    this.globalVarUpdate = Observable.create((observer: Observer<string>) => {
      this.globalVarObserver = observer;
    });
  }

  updateGlobalVar(newValue: string) {
    this.globalVar = newValue;
    this.globalVarObserver.next(newValue);
  }
}
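To see the pattern without any Angular/RxJS machinery, here is a framework-free TypeScript sketch of the same idea. The class and method names mirror the snippet above, but the subscription mechanism is a simplified stand-in for the Observable:

```typescript
// Simplified stand-in for the Observable-based service above: one shared
// instance holds a global value and notifies subscribers on every change.
type Listener = (value: string) => void;

class SharedService {
  private globalVar = "";
  private listeners: Listener[] = [];

  // register a callback to be invoked on every update
  subscribe(fn: Listener): void {
    this.listeners.push(fn);
  }

  updateGlobalVar(newValue: string): void {
    this.globalVar = newValue;
    this.listeners.forEach(fn => fn(newValue));
  }

  get value(): string {
    return this.globalVar;
  }
}
```

In a real Angular app, keep the Observable-based version and register the service once at bootstrap so every component shares the same instance.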

See this question for more details:

Posted in Uncategorized

How to Disable a Dell Laptop Function Key – Change Function key behavior in Dell laptops

With new Dell laptops, you don’t need to change the function key behavior in the BIOS. You can change it by pressing the Fn and Esc keys together, which switches the behavior of the function keys.

Posted in Knowledge, Laptop, Problem solving, Uncategorized

Low latency html5 WebSocket Server to stream live and computed data to web browser grid (a flavor of SignalR)




Live updates to the browser are an important scenario in today’s world. All businesses (banking, trading, healthcare, retail, etc.) rely heavily on internet/browser-based apps to reach their esteemed customers. Be it stock quotes, news or mail, live updates bring agility to your web application. This article is about an html5 WebSocket service that streams real-time, live data to a browser grid. This service can also compute complex math expressions (formulas) dynamically, at runtime. Say, for example, you need to live-update items such as stock name, symbol, quantity, price and position (short/long) in real time in user browsers: this service can stream them to the browser grid, and it handles this better than AJAX/COMET-based grids as it uses low latency TCP communication via html5 WebSocket. If you would like to add a few more items that are computed based on other items in your data, this websocket server takes up that work for you behind the scenes.
Let’s say you would like to add a new item, Total Position (where Total Position = price * quantity * Long/Short); this service computes it automatically. Given the math expression formula, it performs the computations behind the scenes at runtime. If all this is not enough, it can add new formulas on the fly. Say you deployed this grid to production, and someone thinks your formula is wrong and needs an immediate change. Just modify the formula on the server side; all client browsers will see live updates with the right formula. Isn’t this nice?

CSV to Web browser using html5 websocket

Fig 0: CSV file data, plus computed data based on math expressions, is streamed in real time through a low latency html5 websocket to the client browser

Quick Demo

There are two Quick Demo binaries included with this article:

How to execute demo binaries

Common steps for checking out both demos:

  1. Download the appropriate binaries and copy into a folder/directory
  2. Run wsSocket.exe – click ‘Allow’ if firewall comes up
  3. Open testHTML1.html in your web browser
  4. Click ‘Stream’ button in browser web page

Grid with Formula

This demo is intended to showcase updates from a CSV file to the browser. Follow steps (1) through (4) above, then follow the steps below:

  • Open test.csv file in an editor (such as TextPad, Notepad).
  • Modify the value in the CSV file and save it.

You will see the update streaming from the CSV file to the browser. The most interesting part: add a new math expression/formula column in the test.csv file (a sample is included), and you will see the math expression automatically computed by the html5 websocket server and updated in the browser grid.

Grid with Random number push:

No further steps are required to see this demo, which is intended to showcase real-time, live updates streaming from the websocket service to the client browser. Random numbers are generated by the websocket service and pushed to the browser using html5 websocket.

Screen shots of demo

Grid with on-the-fly Formula update demo

Fig 1: Data updated in the CSV file at the server side is streamed to the web browser at the client side. Note the formula column, which does not have data in the CSV file, is automatically computed by the websocket service and streamed to the web browser.

Grid with random number push demo

Fig 2: Random number push using low latency html5 stream

Using the code

Using this websocket service in your application is pretty simple.

  • Download webSocket service source code (click here) and save it to your desired folder/ directory
  • Compile the project webSocket using Visual Studio 2012 (or another version of the Visual Studio compiler). This server code compiles as a dynamic library (webSocket.dll)
  • In your application, add webSocket.dll as reference
    • In your Visual Studio application project, right click References.  Click Add reference > browse and select the file webSocket.dll from the directory that you just compiled in step above
  • Add below code at top of your c# file
using wsSocket;
  • Then, you can start streaming with the four lines of code below:

string data = @"-1,-1, 4,5,
              Item_1, Item_2, Item_3, Computed_1 = Item_1 + Item_2, Computed_2 = Item_2 + Item_3,
              "; // remaining data rows omitted for brevity

html5Stream wSock = new html5Stream();
wSock.startServer();
// Clients will get an update whenever the setStreamData method is called
wSock.setStreamData(data);
Explanation of above code

The first line assigns the variable data with the streaming content. Details about this content are explained later. The next line creates an object of type html5Stream, named wSock. Then the startServer method is called to start the WebSocket server. Remember, startServer is a blocking call, so if you are looking to stream real-time, live data you need to call setStreamData from a separate thread.

Output in chrome

Fig 3: Output of the above code in the Google Chrome browser

Explanation of data passed:

First line: -1, -1, 4, 5
-1,  -1 – Reserved for future use
4 – Number of rows
5 – Number of columns

Second line: Item_1, Item_2, Item_3, Computed_1 = Item_1 + Item_2, Computed_2 = Item_2 + Item_3
Item_1, Item_2, Item_3 – Column header titles
Computed_1, Computed_2 – Header titles and the formulas used to compute these columns. Note these columns are not filled in the data lines that follow

All other lines: Data that goes to the browser. Please note, the data for the formula fields is not filled in; it is auto-computed by the smart grid
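To make the header convention concrete, here is a hypothetical TypeScript sketch (not the article’s C# server; all names are invented) that parses header cells like Computed_1 = Item_1 + Item_2 and fills in the computed columns for one data row. It handles simple ‘+’ formulas only:

```typescript
interface Column {
  name: string;
  formula?: string[]; // operand column names for "A = B + C" style headers
}

// Parse one header cell; "Computed_1 = Item_1 + Item_2" yields a formula column.
function parseHeader(cell: string): Column {
  const [name, expr] = cell.split("=").map(s => s.trim());
  return expr ? { name, formula: expr.split("+").map(s => s.trim()) } : { name };
}

// Fill one raw data row: plain columns pass through, formula columns are summed
// from the named operand columns (mirroring the "auto-computed" fields above).
function computeRow(cols: Column[], raw: number[]): number[] {
  const byName: Record<string, number> = {};
  cols.forEach((c, i) => { if (!c.formula) { byName[c.name] = raw[i]; } });
  return cols.map((c, i) =>
    c.formula ? c.formula.reduce((sum, op) => sum + byName[op], 0) : raw[i]);
}
```

With the sample header, a row [1, 2, 3, 0] would come out with the Computed_1 cell filled in as 1 + 2 = 3.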

Websocket Security

html5 websocket is an evolving standard. Security is one of the key factors to consider while using websocket. No enterprise wants its data to be seen by someone in the middle. Here are a few good pointers to consider:

Secure websocket: For production, make it mandatory to use wss (secure websocket) instead of ws. This protects you from attacks such as man-in-the-middle.

Client input: As websocket is a TCP connection, make sure to validate the buffer you receive from the client. Such validation may be a little overhead in your design, but it gives you the much bigger benefit of protection against attacks such as SQL injection, buffer overruns, etc.

Avoid exposed services: Avoid exposing another service through websocket without proper authentication/authorization in place. Websocket as of now does not support authentication/authorization. The HTTP connection needs to perform this work before stepping the connection up to websocket.

Do not trust Origin headers: A websocket connection carries origin details in the header info. Do not make serious decisions based on this origin information; a malicious client can pretend to be a standard web browser.

How it works

Ajax, a brief overview

The browser works as a request-response client. Typically, the browser client requests resources (such as html) from a server. The server (a.k.a. web server) returns resources. Case closed (i.e., connection closed!). As more and more businesses started moving their applications to browsers, new scenarios (use-cases) demanded responsive web applications. We all love Google suggestions when we type in the search box, right? That’s a good use of the AJAX pattern. Here is a brief explanation of AJAX:

Fig 4: Simple explanation of how AJAX works

As in the above picture, AJAX calls are made by the browser behind the scenes, i.e., part of the webpage is updated without reloading the entire page. Typically AJAX messages are small. AJAX calls use a special type of request called XMLHttpRequest. When the response is received from the server, the information is presented to the user without refreshing the entire page. To the user, it appears as if the web page responds instantaneously without a call to the server. AJAX is good for user-created events: here, the user typed into a web page (such as the Google search box) to initiate an AJAX call to the server. Consider another use-case, say a live data grid: AJAX is not a suitable solution, because in a live data feed the server creates the event. That is, the server gets to know when new data arrives. Clients making timer-based AJAX calls unnecessarily increase network traffic, let alone dry up server resources. If the data passed is big, the server gets to a state where it cannot accept any new connections.

COMET, a brief overview

Timer-based AJAX calls clog the server with unnecessary requests. To solve this, COMET (a.k.a. Reverse AJAX) was actively used. Reverse AJAX/COMET lets the server push data instead of the client polling for data at a specific interval. This is perfect for server-initiated event use-cases such as a live feed to the browser.

Fig 5: Simple explanation of how COMET / Reverse AJAX works

As in the above picture, client calls to the server are put on hold by the server. Hold is different from disconnect: holding a call means the server keeps these open connections pending. When new data becomes available, the server sends the response through these pending connections. This is much better than AJAX, but not perfect. It suffers from an inability to handle errors reliably, restrictions on the number of connections, latency issues, etc.

HTML5 websocket, a low latency, reliable communication standard

In simple terms, the very same http connection between browser and your server is upgraded to TCP, a reliable protocol.  A nice, simple and clean approach.  This allows full duplex communication between browser and server.  This cuts all the middle-man (I mean extra layer) involved in the process.   HTML5 provides a nice low-latency, reliable tcp communication between browser and server.  More importantly, as this is a standard, the requirement is: it shall work in all browsers.  Every firm is working to embrace HTML5 today

Fig 6: Simple explanation of how html5 hand shake is established between client browser and server

How html5 websocket works

html5 websocket works by a client initiating a connection to a server. Typically, a web browser attempts to connect to a server exposing a websocket interface. The initial handshake is as simple as shown in the picture below. When we meet a person, we say “How are you” and shake their hand; while shaking hands, the other person says “Hello, nice meeting you.” After this initial handshake, the conversation starts and both parties can initiate new topics. Similarly, after the initial handshake the http protocol is upgraded to TCP. Then the communication becomes low latency, full duplex TCP, as shown in Fig 7 below.

Fig7: html5 websocket handshake steps

For example, here is a sample from the Chrome browser. Below is what a browser sends to handshake with a server of interest:

GET /service HTTP/1.1
Host: localhost:8080
Connection: Upgrade
Pragma: no-cache
Cache-Control: no-cache
Upgrade: websocket
Origin: null
Sec-WebSocket-Version: 13
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.101 Safari/537.36
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Sec-WebSocket-Key: 198ol8E3A0P/DPNlSOq4XA==
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits

// copied from chrome browser request

And below is what a server replies:

HTTP/1.1 101 Switching Protocols\r\n
Upgrade: websocket\r\nConnection: Upgrade\r\n
Sec-WebSocket-Accept: n72SJe/JO84rHrLBkigtRDc+6QA=\r\n\r\n

Intentionally, I have left the ‘\r\n’ readable in the message. Note the last line ends with two of them; they are important for the browser to accept the server’s handshake. After this handshake, server and client are ready for full duplex, low latency communication.
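The Sec-WebSocket-Accept value above is not arbitrary. Per RFC 6455, the server appends a fixed GUID to the client’s Sec-WebSocket-Key, SHA-1 hashes the result and base64-encodes it. A minimal Node.js/TypeScript sketch, separate from the article’s C# implementation:

```typescript
import { createHash } from "crypto";

// RFC 6455 magic GUID appended to the client's Sec-WebSocket-Key
const WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

// Compute the Sec-WebSocket-Accept header value for a given client key.
function computeAcceptKey(clientKey: string): string {
  return createHash("sha1").update(clientKey + WS_GUID).digest("base64");
}
```

The RFC’s own sample key dGhlIHNhbXBsZSBub25jZQ== yields s3pPLMBiTxaQ9kYGzzhZRbK+xOo=.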

Explanation of code

Let’s look at how a basic html5 connection is established. html5Stream.StartServer is a simple method that accepts client connections and processes them asynchronously. This method also spawns a separate thread that processes all client publishing work. We will take a look at this method later. For now, this method is not html5-significant; it’s a plain simple socket accept, nothing more. html5Stream.AcceptConnection performs the necessary html5-specific handshake with the client.

private void OnAcceptConnection(object param)
{
    // ------------ code skipped for brevity

    int rcvBytes = client.Receive(buffer);
    headerResponse = getStrFromByte(buffer, rcvBytes);
    if (client != null)
    {
        prepareClientResponse(headerResponse, ref sendBuffer);
        rcvBytes = client.Receive(buffer);
        string dataFromClient = decodeData(buffer);
        // ------------ code skipped for brevity
    }
}



This function performs the handshake necessary for upgrading the HTTP connection to a full-duplex WebSocket connection.
Steps to establish the handshake:

  • Create a socket and bind it to a specific port.
  • Once the connection is established, the browser sends a request like the one shown above. Older browsers may send slightly different data, and so may other browsers, so if you are writing production-grade code you need to handle all of these cases.  Please refer to Reference (2).
  • Based on the handshake details, an answer key needs to be generated.  Take a look at the html5Stream.prepareClientResponse method, which prepares the server’s handshake response to the connecting client.  The handshake response looks like the reply shown above; if you read it, it is surprisingly simple.  That’s it, handshake complete.  From now on, client and server can enjoy low-latency, reliable, full-duplex communication with each other.
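The handshake response itself is just a small block of text.  A hypothetical helper in the spirit of prepareClientResponse (sketched in Python; headers as in the capture above) could look like this:

```python
def prepare_client_response(accept_key: str) -> bytes:
    """Build the HTTP 101 upgrade response; note the blank line (\\r\\n\\r\\n) at the end."""
    return (
        "HTTP/1.1 101 Switching Protocols\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        f"Sec-WebSocket-Accept: {accept_key}\r\n"
        "\r\n"
    ).encode("ascii")
```

The trailing empty line is the pair of ‘\r\n’ sequences the article stresses; without it the browser will not complete the handshake.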

Now you are probably wondering just how simple this is, correct?
Everything should be made as simple as possible, but not simpler.
— Albert Einstein

So, to avoid being “simpler”, the messages are not sent raw.  Communication messages between server and client employ data framing (remember, Einstein said, not simpler).  Joking aside, as described in Reference (3), data framing is employed to avoid confusing network intermediaries and for security reasons; even over SSL/TLS this masking is performed.  Fortunately, it is not that difficult to implement.  I implemented only the part that we need for our grid to work (brevity).  Take a look at the methods html5Stream.encodeData and html5Stream.decodeData.
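For short messages the framing is small: client-to-server frames carry a 4-byte masking key that is XORed over the payload, while server-to-client frames are unmasked.  The sketch below (in Python for brevity; html5Stream.encodeData and html5Stream.decodeData are the C# counterparts) handles only unfragmented text frames with payloads under 126 bytes, which is all a demo like this needs:

```python
def encode_text_frame(text: str) -> bytes:
    """Server-to-client frames are unmasked: FIN + text opcode, then length, then payload."""
    payload = text.encode("utf-8")
    assert len(payload) < 126, "extended payload lengths not handled in this sketch"
    return bytes([0x81, len(payload)]) + payload

def decode_small_text_frame(frame: bytes) -> str:
    """Unmask a single client-to-server frame with payload length < 126."""
    assert frame[1] & 0x80, "client-to-server frames must be masked"
    payload_len = frame[1] & 0x7F          # low 7 bits hold the length
    mask = frame[2:6]                      # 4-byte masking key
    data = frame[6:6 + payload_len]
    return bytes(b ^ mask[i % 4] for i, b in enumerate(data)).decode("utf-8")
```

A production implementation also needs extended lengths, continuation frames, and control frames (ping/pong/close).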

Now that handshake and encoding are complete, we are ready to stream data.  The html5Stream.publishAll method does this work.  It waits on the dataReceived event, which is set when the user calls html5Stream.setStreamData, i.e., when data is ready to be published to clients.  The available data is encoded, and each client in the list is enumerated and sent the new update.

private void publishAll(object dummy)
{
    byte[] byteToSend = null;

    // ----------- lines skipped for brevity

    bool dataReceived = eventDataAvailable.WaitOne(1000);
    if (dataReceived)
    {
        encodeData(getStreamData(), ref byteToSend);
        lock (lockObj)
        {
            HashSet<Socket>.Enumerator enumSockets = socketsToUpdate.GetEnumerator();
            SocketAsyncEventArgs sendData = new SocketAsyncEventArgs();
            sendData.SetBuffer(byteToSend, 0, byteToSend.Length);
            while (enumSockets.MoveNext())
            {
                // ---------- lines skipped for brevity
            }
        }
    }
}



Points of Interest

=>  This implementation is aimed at learning, so brevity and simplicity are exercised throughout the code for easy understanding rather than performance.  An enterprise-grade source would use entirely different data structures, interfaces, error/exception handling, etc.  If you are interested in a professional/enterprise-grade library, get in touch with me.
=>  A free file streamer is included with the sample project.  Open the CSV file, add new columns and rows, and also try adding a new formula.  It is quite interesting to see the dynamic update appear on the browser screen on another machine.  Adding new computed columns has never been easier than this.

Posted in ASP.NET MVC, C#, Knowledge, Problem solving, Uncategorized

100 Common Interview Questions for Testers – QC – QA




#1. Can you introduce yourself?

#2. Tell us about the most recent project you worked on?

#3. What were your role and responsibilities in that project?

#4. What difficulties have you faced during testing?

#5. And how did you overcome them?

#6. Tell us about an interesting bug you found?

#7. Why did you choose testing?

#8. Where did you acquire your testing knowledge?


#9. What do you do when a developer says they cannot reproduce your bug?

#10. Have you ever worked with a difficult developer, and how did you handle him/her?

#11. How do you add value to the companies you work for? Can you give an example?

#12. Describe your ideal boss?

#13. What would you do if a conflict arose between you and your team members?

#14. Do you often contribute ideas to improve project quality or the test process? Give an example of an improvement you proposed in your project?

#15. If you run your test cases and find no bugs, what does that mean?

#16. Is the tester who finds the most bugs in a project necessarily a good tester? Why?

#17. What qualities does a good test engineer need?

#18. Suppose your boss wants testing finished by the end of the day while you still have many test cases to execute; how would you handle it?

#19. What do you do when a developer rejects your bug?

#20. Why should we hire you for this job?

#21. How do you improve your skills and broaden your testing knowledge?

#22. Testing has many challenges. What keeps you moving forward?

#23. What is your favorite book on testing?

#24. Can you name a few well-known figures in software testing?


#25. What is the main benefit of testing early in the software development life cycle?

#26. Why does a defect cost more to fix the later it is found?

#27. What is system testing?

#28. Why should we automate a test suite?

#29. In your opinion, what is testing?

#30. What does a test report include, and what are its benefits?

#31. In which phase of the software development life cycle are most defects introduced?

#32. What is random testing? When is it used?

#33. What are the best practices for ensuring software quality?

#34. How do you know whether your testing is effective?

#35. What is load testing?

#36. What is the purpose of a bug report?

#37. What factors determine testing priority?

#38. What are the basic components of a bug report?

#39. At which stage of the software development life cycle should testing begin?

#40. What is exploratory testing?

#41. Which types of testing are important for web testing?

#42. How can you mitigate risks in a project?

#43. When do you stop testing?

#44. What are the benefits of independent testing?

#45. Which types of tests should not be automated?

#46. What would you do to improve your company’s testing process?

#47. What is the difference between the priority and severity of a defect?

#48. What is equivalence partitioning?

#49. When a conflict arises between you and your team members, how do you handle it?

#50. What is the difference between re-testing and regression testing?

#51. What are the different methodologies in the Agile development model?

#52. In a testing project, which testing activities can be automated?

#53. What are the challenges in testing?

#54. What is the sequence of steps when you find a bug?

#55. Consider the following techniques and state which are static testing techniques and which are dynamic testing techniques?

#56. What is the difference between static testing and dynamic testing?

#57. What is a Requirement Traceability Matrix?

#58. How much testing is considered “enough”?

#59. What is test coverage?

#60. What is regression testing?

#61. What are the advantages and disadvantages of automated GUI testing?

#62. What is DRE (Defect Removal Efficiency)?

#63. Can automated testing replace manual testing?

#64. What is black-box testing? What are the black-box testing techniques?

#65. What factors should be considered when selecting automated testing tools?

#66. What usually causes testing projects to fail?

#67. What is the V-Model?

#68. What criteria should be considered when preparing to automate a test suite?

#69. In what order do you carry out a testing activity?

#70. While monitoring a project, which factors do you pay attention to?

#71. In what order do you execute a set of test cases?

#72. On what basis do you estimate a project?

#73. Give one of the main reasons why a developer should not test his or her own work?

#74. What is the purpose of white-box testing?

#75. What are the different levels of testing?

#76. What makes a good test case?

#77. During testing, the tester finds a bug and reports it, but the developer disagrees that it is a bug. What should the tester do next?

#78. What are verification and validation?

#79. What information does a test plan include?

#80. If you receive a build from a developer with too many critical bugs, what do you do?

#81. What tells us the quality of the test execution?

#82. What is risk-based testing?

#83. When should we stop testing?

#84. What is security testing?

#85. What would you do if a bug leaked through to end users?

#86. What is the most important reason to apply risk-based testing?

#87. What activities does the testing process include?

#88. What are experience-based testing techniques?

#89. Why does software have bugs?

#90. What types of testing are commonly used?

#91. Why do boundary value analyses often make good test cases?

#92. What common mistakes can adversely affect a project?

#93. What are the 7 principles of testing?

#94. What is component testing?

#95. How would you test a web login screen?

#96. How do you perform testing when there is no specification document?

#97. What are the basic components of a bug report?


#98. What are your goals for the next 5 years?

#99. What kind of work environment do you prefer?

#100. Do you have any questions for us?

Posted in Jobs, Knowledge, Problem solving, Software Job, Uncategorized

Building A Large Scalable System


Most applications are developed using a three-tier architecture consisting of a presentation layer, a business logic layer, and a data access layer. The presentation layer contains the aspx, HTML, or JSP pages; the business layer contains services such as WCF, Web API, or web services; and the data layer contains the code that communicates with the databases where the actual data resides.

This is how the application architecture looks.

How scalability comes into picture

The web application is up and running, users are happy, and the business is generating revenue; everything goes fine while the business is small. Gradually, the user base grows, traffic becomes huge, and the web application becomes very slow. When an application is slow, users lose interest in it, and the business loses revenue and reputation; losing the business is losing everything.

This is where scalability comes into the picture: how to extend the system to serve significantly higher volumes of users. Scalability is not the same as performance, and it is not a code issue; it is about extending the application across multiple servers, multiple databases, and multiple locations to serve millions of users.

When designing any system, there are some key considerations that developers and architects should keep in mind. These are:

  • Scalability – the number of users/sessions/transactions/operations the system can support.
  • Performance – the system should make optimal use of resources such as CPU, threads, and memory.
  • Responsiveness – the time taken per operation should be low; a user should not wait long to get information from the server. For example, if we are booking tickets and the transaction is very slow, we think it is a bad application.
  • Availability – the system should be available at any given point in time. If not fully available, it should be partially available so that end users perceive the system as available.
  • Downtime Impact – the impact of the downtime of a server/service/resource (the number of users affected and the type of impact) should be minimal.
  • Cost – the cost of the system should be within budget; an overly expensive system does not return a profit to the organization.
  • Maintenance Effort – the system must require very little maintenance effort. For example, once developed, it should be easy to extend or enhance its features.

There are some key techniques to consider when designing scalable systems. These are:

  • Vertical scaling
  • Horizontal Scaling
  • Horizontal Partitioning
  • Vertical Partitioning
  • Load balancing
  • Master-Slave setup
  • Distributed Caching
  • Use NOSQL
  • Incremental model development

Vertical scaling

Vertical scaling means adding hardware to the system, i.e., more RAM, CPU, or processors in the existing machine, to increase processing capacity on the server. In a virtual machine setup this can be configured virtually instead of adding real physical machines. When increasing hardware resources we do not change the number of nodes. This is referred to as “scaling up” the server.

The advantage is that it is simple to implement; the disadvantage is that there is a finite limit to how much hardware we can add. Hardware does not scale linearly (there are diminishing returns for each incremental unit), and adding hardware requires downtime.

Horizontal Scaling

Horizontal scaling means adding more web servers, behind load balancing, to the system. Multiple machines then work together to provide quick responses and availability for any part of the system, including the database. With multiple machines we can distribute the workload among them over time.

Each machine works as a separate but identical node, and multiple nodes are sometimes treated as a cluster of servers. This is referred to as “scaling out” the web server.

The advantage is that multiple servers can share the user traffic; in return, code synchronization, session management, and data caching must be handled properly across servers.

Horizontal Partitioning

Horizontal partitioning segments rows into multiple tables with the same columns.

For example, customers with city code ABC are stored in CustomersABC, while customers with city code XYZ are stored in CustomersXYZ; here the two partition tables are ABC and XYZ.

Building database partitioning by value into your design from the beginning is a good approach.
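The ABC/XYZ example above can be expressed as a tiny routing function (a Python sketch; the table and city-code names are the hypothetical ones from the example):

```python
def partition_table_for(city_code: str) -> str:
    """Route a customer row to its horizontal partition by city code."""
    # Hypothetical partition map matching the ABC/XYZ example:
    partitions = {"ABC": "CustomersABC", "XYZ": "CustomersXYZ"}
    if city_code not in partitions:
        raise ValueError(f"no partition configured for city code {city_code!r}")
    return partitions[city_code]
```

In practice the same idea is usually implemented in the data access layer or by the database engine’s own partitioning features, so application code stays unaware of which physical table it hits.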

Vertical Partitioning

The term vertical partitioning denotes increasing the number of nodes by distributing tasks/functions among them. Each node (or cluster) performs a separate task, different from the others. Vertical partitioning can be performed at various layers: application, server, data, or hardware. Task-based specialization reduces context switching and lets us optimize and tune each part as much as possible; instead of putting everything into one box, put it into different boxes. For database tables, suppose we have customer, orders, customer order, and order status in one DB; we can move some of these into another DB.

Load balancing

Load balancing is the process of directing a user’s request to one server that is part of a server farm. In this way the user load is distributed among several servers.

There are two kinds of load balancers: hardware load balancers and software load balancers. Hardware load balancers are faster, whereas software load balancers are more customizable.
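The simplest scheduling policy a load balancer can use is round robin, handing each new request to the next server in rotation. A toy sketch (Python; the server names are hypothetical):

```python
import itertools

class RoundRobinBalancer:
    """Hand each incoming request to the next server in rotation."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        return next(self._cycle)

# Hypothetical server farm:
lb = RoundRobinBalancer(["server1", "server2", "server3"])
```

Real balancers layer health checks, weighting, and least-connections policies on top of this basic rotation.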

There are some things to keep in mind when coding for a system that sits behind a load balancer.

Do not depend on cache or session data written to one server or its local file system; do not rely on the file system at all.

Suppose we have several servers, user1’s request is served by server1, and then server1 goes down. The load balancer will redirect the request to server2, but how will the session data on server1 be passed to server2? This is the problem that sticky sessions (server affinity) try to work around.

For proper session management we should instead use centralized session management, where multiple servers can read the session data. SQL Server session state mode is used for most .NET-based web applications in large systems.

Scaling from a single DB server to a Master-Slave setup

Isolating the Database based on the purpose gives better performance in scalable systems.

As an example, earlier we had one database that was used for generating various SSRS and Crystal reports, for different SQL jobs, Windows services, and email communication, and for all transaction data. We moved to separate master-slave databases for better results. All transaction writes are sent to a single master, which replicates the data to multiple slave nodes. Almost all RDBMSs (MySQL, MSSQL, and Oracle) support native replication.


Distributed Caching

Effective caching is key to performance in any distributed system. For a highly scalable system, the caching should be distributed caching, which may span multiple servers. The cached data may grow over time, so there must be an effective way to handle that growth.

NCache and Velocity/AppFabric are some good distributed caching options for a large-scale .NET application. The cache information is stored across a cluster of nodes, all of which can replicate and locate information for faster access.
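Distributed caches need to decide which node owns a given key; a common scheme is consistent hashing, so that adding or removing a node remaps only a fraction of the keys instead of all of them. A toy sketch (Python; real products such as NCache or AppFabric use their own partitioning schemes):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Map cache keys to nodes on a hash ring with virtual nodes."""

    def __init__(self, nodes, vnodes=100):
        # Place several virtual points per node on the ring to smooth the distribution.
        self._ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes for i in range(vnodes)
        )
        self._points = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        """Find the first ring point clockwise from the key's hash."""
        idx = bisect.bisect(self._points, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]
```

The same lookup is deterministic on every client, so all web servers agree on which cache node holds a key without any central coordinator.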


Use NoSQL

NoSQL databases offer advantages in scalability, availability, and zero downtime. They store data across multiple nodes, where it can be easily accessed and replicated as needed.

Some NoSQL tools are Cassandra, MongoDB, and CouchDB.

Incremental model development

Inspect the issues, change as needed, and adapt: this is the key to building a scalable system.

Write automated builds using Jenkins, TeamCity, or TFS, with 100% test coverage. Changes to databases, code, and configuration need proper testing in the test or UAT environment before the code is pushed to production. From day one of development, designers, architects, and developers should think about building loosely coupled modules. Choosing a proper platform and language is also a big factor in building large systems.

As applications need to be able to scale in distributed environments with a number of servers, these incremental development steps help to a large extent.

We cannot build a scalable system in a single day; as the saying goes, “Rome was not built in a day.” It takes collaboration and great teamwork among developers, architects, QA, infrastructure, and DevOps to build a highly scalable system.

Posted in Uncategorized