AWS Certified Solutions Architect

AWS Certified Solutions Architect – Associate Level is intended for individuals who work, or want to work, as a Solutions Architect. The certification validates a candidate's ability to:

  • Identify and gather requirements in order to define a solution based on knowledge of architectural best practices.
  • Provide best-practice architectural guidance to developers and system administrators throughout the lifecycle of the project.

The knowledge and skills required at this level cover the following areas. The required depth of knowledge is defined by these major components:

AWS Knowledge

  • Hands-on experience with AWS compute, networking, storage, and database services.
  • Professional experience architecting large-scale distributed systems.
  • Understanding of the concepts of elasticity and scalability.
  • Understanding of the network technologies that relate to AWS.
  • A good understanding of all security features and tools that AWS provides and how they relate to traditional services.
  • A strong understanding of how to interact with AWS (AWS SDK, AWS API, Command Line Interface, AWS CloudFormation).
  • Hands-on experience with AWS deployment and management services.

IT Knowledge

  • A strong understanding of multi-tier architectures: web servers (Apache, Nginx, IIS), caching, application servers, load balancers.
  • RDBMS (MySQL, Oracle, SQL Server), NoSQL.
  • Knowledge of message queuing and Enterprise Service Bus (ESB).
  • Familiarity with loose coupling and stateless systems.
  • Understanding of the different consistency models in distributed systems.
  • Experience with CDNs and performance concepts.
  • Networking experience with route tables, access control lists, firewalls, NAT, HTTP, DNS, IP, and OSI networking.
  • Knowledge of RESTful Web Services, XML, JSON.
  • Familiarity with the software development lifecycle.
  • Work experience with information and application security, including public-key encryption, SSH, access credentials, and X.509 certificates.

The following training or equivalent preparation will greatly help in preparing for the exam:

  • Architecting on AWS (aws.amazon.com/training/architect)
  • In-depth knowledge of, or training in, at least one high-level programming language.
  • AWS Cloud Computing Whitepapers (aws.amazon.com/whitepapers)
    • Overview of Amazon Web Services
    • Overview of Security Processes
    • AWS Risk & Compliance Whitepaper
    • Storage Options in the Cloud
    • Architecting for the AWS Cloud: Best Practices
  • Experience deploying hybrid systems with on-premises and AWS components.
  • Using the AWS Architecture Center website (aws.amazon.com/architecture)

Note: This blueprint includes the content weightings, the test objectives, and example content. Example topics and concepts are included only to clarify the test objectives; they should not be construed as a comprehensive list of all the content of this examination.
The table below lists the weighting of each knowledge domain on the exam.
Domain | % of Examination
1.0 Designing highly available, cost effective, fault tolerant, scalable systems | 60%
2.0 Implementation/Deployment | 10%
3.0 Data Security | 20%
4.0 Troubleshooting | 10%
TOTAL | 100%

Response Limits

The examinee selects, from four (4) or more response options, the option they believe best completes the statement or answers the question. Skipped or incorrectly answered questions are treated as reflecting incomplete knowledge or skill.

The test item formats used are:

  • Multiple-choice: the examinee selects the one option that best answers the question or completes the statement. Options may be embedded in a graphic on which the examinee "points and clicks".
  • Multiple-response: the examinee selects more than one option to answer the question or complete the statement.
  • Sample directions: read the statement or question and, from the response options, select only the option that represents the best answer.

Content Limits

1.     Domain 1.0: Designing highly available, cost efficient, fault tolerant, scalable systems

1.1   Identify and evaluate cloud architectures, such as basic components and effective designs.

Content may include the following:

  • How to design cloud services
  • Planning and design
  • Monitoring
  • Familiarity with:
    • Best practices
    • Developing client specifications, including pricing/cost (e.g., On-Demand vs. Reserved vs. Spot; RTO and RPO DR design)
    • Architectural trade-off decisions (high availability vs. cost, Amazon Relational Database Service (RDS) vs. installing your own database on Amazon Elastic Compute Cloud (EC2))
    • Integrating with existing development environments and building a scalable architecture
    • Elasticity and scalability

2.     Domain 2.0: Implementation/Deployment

2.1   Identify the appropriate techniques and methods using Amazon EC2, Amazon S3, Elastic Beanstalk, CloudFormation, Amazon Virtual Private Cloud (VPC), and AWS Identity and Access Management (IAM) to code and implement a cloud solution.

Content may include the following:

  • Configuring an Amazon Machine Image (AMI)
  • Operating and extending service management in a private cloud
  • Configuring solutions appropriately in private and public clouds
  • Launching instances across multiple geographical regions.

3.     Domain 3.0: Data Security

3.1   Recognize and implement secure practices for optimum cloud deployment and maintenance.

Content may include the following:

  • Cloud Security Best Practices
    • How to build and use a threat model
    • How to build and use a data flow diagram for risk management
      • Use cases
      • Abuse Cases (Negative use cases)
  • Security Architecture with AWS
    • Shared Security Responsibility Model
    • AWS Platform Compliance
    • AWS security attributes (customer workloads down to physical layer)
    • Security Services
    • AWS Identity and Access Management (IAM)
    • Amazon Virtual Private Cloud (VPC)
    • CIA and AAA models, ingress vs. egress filtering, and which AWS services and features fit
    • “Core” Amazon EC2 and S3 security feature sets
    • Incorporating common conventional security products (Firewall, IDS:HIDS/NIDS, SIEM, VPN)
    • Design patterns
    • DDoS mitigation
    • Encryption solutions
    • Complex access controls (building sophisticated security groups, ACLs, etc.)
    • Amazon CloudWatch for the security architect

3.2   Recognize critical disaster-recovery techniques and their implementation.

Content may include the following:

  • Disaster Recovery
    • Recovery time objective
    • Recovery point objective
    • Amazon Elastic Block Store
  • AWS Import/Export
  • AWS Storage Gateway
  • Amazon Route 53
  • Testing the recovered data

4.     Domain 4.0: Troubleshooting

Content may include the following:

  • General troubleshooting information and questions

http://awslagi.com/noi-dung-thi-aws/


Solution architecture: Dev-Test deployment for testing microservice solutions

This architecture represents how to configure your infrastructure for development and testing of a microservices-based system.

This solution is built on the Azure managed services: Visual Studio Team Services, Service Fabric and SQL Database. These services run in a high-availability environment, patched and supported, allowing you to focus on your solution instead of the environment they run in.

[Architecture diagram: Visual Studio Team Services build and release agents deploy ARM infrastructure and Service Fabric code (services S1-S3) to Development, QA, and Production resource groups, each with its own database and Service Fabric hosts.]

Implementation guidance

Products Documentation

Visual Studio Team Services

Visual Studio Team Services manages the development process.

Microsoft Release Management

The Microsoft Release Management build and release agents deploy the Azure Resource Manager template and associated code to the various environments.

Azure resource groups

Azure resource groups are used to define all the services required to deploy the solution into a dev-test or production environment.

Service Fabric

Service Fabric orchestrates all of the microservices used in the solution. In development, code is deployed directly from the development tools, while in test and production environments the code is deployed through the build and release agent using Resource Manager templates.
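For a rough sense of what the build and release agent does with a Resource Manager template, the same deployment can also be driven programmatically. The sketch below uses the azure-identity and azure-mgmt-resource Python packages; the subscription ID, resource group, deployment name, and template file are placeholders rather than part of this reference architecture:

import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

with open("azuredeploy.json") as f:  # placeholder ARM template file
    template = json.load(f)

# Deploy into the QA resource group; incremental mode leaves resources
# not mentioned in the template untouched.
poller = client.deployments.begin_create_or_update(
    "qa-resource-group",
    "microservices-deployment",
    {"properties": {"mode": "Incremental", "template": template, "parameters": {}}},
)
poller.result()  # block until the deployment completes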

SQL Database

Azure SQL Database maintains data for the website. Copies are deployed in the dev, test, and production environments.

STARTING AND STOPPING EC2 INSTANCES USING A LAMBDA

From http://blog.conygre.com/2016/11/18/starting-and-stopping-ec2-instances-using-a-lambda-and-cut-your-aws-bill-in-half/

CUTTING YOUR AWS EC2 BILL WITH LAMBDA FUNCTIONS

When running a large training program for an investment bank, we needed over 30 EC2 instances, but only between certain hours of the day. This simple Lambda function cut our AWS bill by around 65% compared with the cost of running those instances all day, every day.

As CTO and cofounder of a food delivery business, I was able to cut our AWS bill substantially by running our servers only in the evening, when deliveries were taking place. Again, a simple Lambda function could cut the bill, as we would no longer be running them all the time.

How many of your servers are really needed all the time? If you want to shave your AWS bill, Lambdas make it easy to schedule the starting and stopping of your instances.

HOW TO CREATE THE LAMBDA FUNCTION

PART 1 CREATE THE IAM ROLE WITH PERMISSION TO ACCESS EC2

Any Lambda function runs with a set of permissions, configured as an IAM role. If you don't already have an IAM role with permission to access EC2, you will need to create one first.

  1. In the AWS Administration Console, visit the IAM service.
  2. In the left pane of the IAM service, click Roles.
  3. Then click Create New Role.
  4. At the Set Role Name dialog, enter a name, something like Ec2AccessRole.
  5. At the Select Role Type dialog, click Select next to the Amazon EC2 role option.
  6. You are now presented with a list of policies. Locate and select the AmazonEC2FullAccess policy and click Next Step.
  7. At the Review screen, click Create Role.
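If you prefer to script this step, an equivalent role can be created with boto3. The console steps above are the content of this walkthrough; the sketch below is just an alternative, assuming the AWS-managed AmazonEC2FullAccess policy is acceptable for your account. Note that a role executed by Lambda needs a trust policy naming lambda.amazonaws.com:

import json
import boto3

iam = boto3.client('iam')

# Trust policy allowing the Lambda service to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}

iam.create_role(
    RoleName='Ec2AccessRole',
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the AWS-managed EC2 full-access policy to the new role.
iam.attach_role_policy(
    RoleName='Ec2AccessRole',
    PolicyArn='arn:aws:iam::aws:policy/AmazonEC2FullAccess',
)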

PART 2 CREATE THE LAMBDA FUNCTION

  1. In the AWS Administration Console, visit the Lambda service.
  2. Click the Create new Lambda Function button.
  3. At the Select Blueprint dialog, select the first option Blank Function.
  4. At the Configure Triggers dialog, click the grey box and, in the drop-down, select CloudWatch Events Schedule.
  5. In the Configure Triggers form, enter a suitable name for your trigger, something like: StartServersAt8AM
  6. In the Configure Triggers form, enter a suitable description, something like: Start instances at 8am.
  7. In the Configure Triggers form, enter a Schedule Expression. These take the form of cron expressions; cron is a scheduling command found on Unix systems, and AWS uses its standard format for times and dates. For example, to start at 8 AM Monday to Friday, the expression would be: cron(00 08 ? * MON-FRI *). An excellent utility to help you can be found at http://www.cronmaker.com/; this simple website will give you the required cron expression for the time you require. IMPORTANT: Note that the time must be in UTC! (A scripted equivalent for creating this rule is sketched after these steps.)
  8. Check the Enable trigger checkbox and click Next.
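For reference, the same schedule rule can also be created with boto3 (a sketch; the rule name and cron expression mirror the example above):

import boto3

events = boto3.client('events')

# CloudWatch Events schedule: fires at 08:00 UTC, Monday to Friday.
events.put_rule(
    Name='StartServersAt8AM',
    ScheduleExpression='cron(00 08 ? * MON-FRI *)',
    State='ENABLED',
)

Wiring the rule to the function additionally requires an events.put_targets call and a Lambda add_permission call, both of which the console performs for you when you configure the trigger.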

PART 3 CREATE THE CODE FOR THE FUNCTION

Now you need to set up the function itself to start the servers. This will be written in Python.

  1. At the Configure Function dialog, enter a name, something like startMyServers.
  2. At the Configure Function dialog, enter a description, something like Start the servers.
  3. At the Configure Function dialog, set the Runtime to Python.
  4. In the code box below, enter the following code. In our example, we set it up to start servers that carry a specific tag. You could change this to anything you like: some way of identifying the servers you wish to start and stop.

import boto3

ec2 = boto3.resource('ec2')

def lambda_handler(event, context):

    filters = [{
            'Name': 'tag:Role',  # you might change this tag name; our servers had a tag called Role
            'Values': ['MyRoleTagValue']  # the value of the Role tag; add a Role tag to your own instances
        },
        {
            'Name': 'instance-state-name',
            'Values': ['stopped']  # only instances that are currently stopped
        }
    ]

    instances = ec2.instances.filter(Filters=filters)

    stopped_instances = [instance.id for instance in instances]

    if stopped_instances:
        ec2.instances.filter(InstanceIds=stopped_instances).start()
    
  5. In the Lambda function handler and role section, select Choose an Existing Role.
  6. In the drop down that appears, select the role created in Part 1. We suggested the name Ec2AccessRole.
  7. The remaining fields can be left as they are. Click Next.
  8. At the Review dialog, click Create Function.

That’s it! You’re done. To create the Lambda that stops the servers, the process is pretty much the same: create another Lambda, but change the code slightly to check for running instances, and then call stop() on them instead of start(). A simple example of the code is below.


import boto3

ec2 = boto3.resource('ec2')

def lambda_handler(event, context):
    filters = [{
            'Name': 'tag:MyTag',  # the tag used to identify your servers
            'Values': ['MyTagValue']
        },
        {
            'Name': 'instance-state-name',
            'Values': ['running']  # only instances that are currently running
        }
    ]

    instances = ec2.instances.filter(Filters=filters)

    running_instances = [instance.id for instance in instances]

    if running_instances:
        ec2.instances.filter(InstanceIds=running_instances).stop()

 


Running Serverless ASP.NET Core Web APIs with Amazon Lambda

https://aws.amazon.com/blogs/developer/running-serverless-asp-net-core-web-apis-with-amazon-lambda/

One of the coolest things we demoed at our recent AWS re:Invent talk about .NET Core support for AWS Lambda was how to run an ASP.NET Core Web API with Lambda. We did this with the NuGet package Amazon.Lambda.AspNetCoreServer (which is currently in preview) and Amazon API Gateway. Today we’ve released a new AWS Serverless blueprint that you’ll see in Visual Studio or with our Yeoman generator that makes it easy to set up an ASP.NET Core Web API project as a Lambda project.

Blueprint Picker

How Does It Work?

Depending on your platform, a typically deployed ASP.NET Core application is fronted by either IIS or NGINX, which forwards requests to the ASP.NET Core web server named Kestrel. Kestrel marshals the request into the ASP.NET Core hosting framework.

Normal Flow

When running an ASP.NET Core application as an AWS Serverless application, IIS is replaced with API Gateway and Kestrel is replaced with a Lambda function contained in the Amazon.Lambda.AspNetCoreServer package which marshals the request into the ASP.NET Core hosting framework.

Serverless Flow

The Blueprint

The blueprint creates a project that’s very similar to the one you would get if you selected the ASP.NET Core Web Application (.NET Core) project type and chose the Web API template. The key difference is that instead of having a Program.cs file with a Main function bootstrapping the ASP.NET Core framework, the blueprint has LambdaEntryPoint.cs that bootstraps the ASP.NET Core framework.

C#

public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
{
    protected override void Init(IWebHostBuilder builder)
    {
        builder
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseStartup<Startup>()
            .UseApiGateway();
    }
}

The actual Lambda function comes from the base class. The function handler for the Lambda function is set in the AWS CloudFormation template named serverless.template, which will be in the format <assembly-name>::<namespace>.LambdaEntryPoint::FunctionHandlerAsync.

The blueprint also has LocalEntryPoint.cs that works in the same way as the original Program.cs file, enabling you to run and develop your application locally and then deploy it to Lambda.

The remainder of the project’s files are the usual ones you would find in an ASP.NET Core application. The blueprint contains two Web API controllers. The first is the example ValuesController, which is found in the starter ASP.NET Core Web API project. The other controller is S3ProxyController, which demonstrates how to use HTTP GET, PUT, and DELETE requests to a controller and uses the AWS SDK for .NET to make the calls to an Amazon S3 bucket. The name of the S3 bucket to use is obtained from the Configuration object, which means you can set the bucket in the appsettings.json file for local development.

JSON

{
  ...

  "AppS3Bucket": "ExampleBucketName"
}

The Configuration object is built by using environment variables.

C#

public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

    builder.AddEnvironmentVariables();
    Configuration = builder.Build();
}

When the application is deployed, serverless.template is used to create the bucket and then pass the bucket’s name to the Lambda function as an environment variable.

JSON

...

"Get" : {
  "Type" : "AWS::Serverless::Function",
  "Properties": {
    "Handler": "AspNetCoreWithLambda::AspNetCoreWithLambda.LambdaEntryPoint::FunctionHandlerAsync",
    "Runtime": "dotnetcore1.0",
    "CodeUri": "",
    "MemorySize": 256,
    "Timeout": 30,
    "Role": null,
    "Policies": [ "AWSLambdaFullAccess" ],
    "Environment" : {
      "Variables" : {
        "AppS3Bucket" : { "Fn::If" : ["CreateS3Bucket", {"Ref":"Bucket"}, { "Ref" : "BucketName" } ] }
      }
    },
    "Events": {
      "PutResource": {
        "Type": "Api",
        "Properties": {
          "Path": "/{proxy+}",
          "Method": "ANY"
        }
      }
    }
  }
},

...

Logging

ASP.NET Core introduced a new logging framework. To help integrate with the logging framework, we’ve also released the NuGet package Amazon.Lambda.Logging.AspNetCore. This logging provider allows any code that uses the ILogger interface to record log messages to the associated Amazon CloudWatch log group for the Lambda function. When used outside of a Lambda function, the log messages are written to the console.

The blueprint enables the provider in Startup.cs, where other services are configured.

C#

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddLambdaLogger(Configuration.GetLambdaLoggerOptions());
    app.UseMvc();
}

The following snippet shows the GetLambdaLoggerOptions call on the Configuration object, which grabs the configuration of which messages to write to CloudWatch Logs. The appsettings.json file in the blueprint configures logging so that messages coming from classes under the Microsoft namespace are written if they’re informational level and above; all other log messages are written at debug level and above.

JSON

{
  "Lambda.Logging": {
    "LogLevel": {
      "Default": "Debug",
      "Microsoft": "Information"
    }
  },

  ...
}

For more information about this package, see the GitHub repository.

Deployment

Deploying the ASP.NET Core Web API works exactly as we showed you in the previous post about the AWS Serverless projects.

Deploy from Solution Explorer

Once deployed, a single Lambda function and an API Gateway REST API are configured to send all requests to the Lambda function. Then the Lambda function uses the ASP.NET Core framework to route to the correct Web API controller. You can test the deployment by accessing the two controllers using the AWS Serverless URL found in the CloudFormation stack view.

  • <aws-serverless-url>/api/values – Example controller
  • <aws-serverless-url>/api/s3proxy – S3 Proxy controller.

Feedback

We’re very excited about running ASP.NET Core applications on AWS Lambda. As you can imagine, the option of running the ASP.NET Core framework on top of Lambda opens lots of possibilities. The Amazon.Lambda.AspNetCoreServer package is in preview while we explore those possibilities. I highly encourage .NET developers to check out this blueprint and the Amazon.Lambda.AspNetCoreServer package and let us know on our GitHub repository or our new Gitter channel what you think and how we can continue to improve the library.


Croke Park: Sound and weather data monitoring within a smart stadium

https://microsoft.github.io/techcasestudies/iot/2016/10/28/CrokePark.html

Updated with a section on security and additional resources

Boasting a capacity of 82,300 people, Ireland’s Croke Park stadium is one of the largest stadiums in Europe. As the national home of the Gaelic games and headquarters of the Gaelic Athletic Association (GAA), it hosts numerous high-profile international sporting, cultural, and music events. And now, within this urban test-bed infrastructure, lies the perfect Internet of Things (IoT) microcosm: a true “smart stadium.”

The Croke Park Smart Stadium project is a collaboration between GAA, Dublin City University (DCU), Intel, and Microsoft to advance innovation around IoT. Intel has strategically positioned sensors and gateways throughout the stadium to enable a range of environmental monitoring, safety, and fan experience use cases. These edge gateways compute and communicate with the sensors, collecting enormous amounts of diverse types of data and storing them on the Microsoft Azure cloud platform.

For the Intel IoT Group Technical Marketing Team, the Croke Park project is “an engineers’ test-bed for deploying an E2E IoT system to enable solutions, but more importantly for understanding the practical realities of what it takes to go into a third-party uncontrolled environment and deploy a bolt-on IoT system that includes sensors, IT equipment, gateways, communications, and interfaces onto existing infrastructure.”

Researchers at DCU are using the Azure IoT Suite to analyze that data, in the process creating dashboards that provide stadium management with real, actionable insights. These insights have provided Croke Park with information and opportunities to improve audience and fan engagement, foster better relations with the local community, reduce the carbon footprint, and ensure a safe experience on some of their tourist attractions while driving efficiencies and cost-effective stadium management. For DCU researchers, the opportunity to gather data from a variety of sensors within the IoT framework in a live environment over long periods of time is an exciting and unique platform for advancing research in data analytics.

Laura Clifford, Commercial Development and Engagement, Research & Enterprise Hub, Dublin City University, led the effort to help interested companies learn about and participate in the project. “We’ve had more than 30 companies actively involved with us in understanding how they could potentially deploy their pre-commercial IoT technologies here at Croke Park,” she said.

Authors:

  • Niall Moran – Principal Technical Evangelist, Microsoft Ireland
  • David Prendergast – Senior Researcher, Intel Ireland
  • Suzanne Little – Lecturer, Dublin City University
  • Dian Zhang – Postdoctoral Researcher, Dublin City University

Business case

One objective of the Croke Park project was to learn about the problems installing and maintaining IoT technology in densely crowded urban environments, but another was to realize how this technology could be deployed to solve real business challenges. With this goal in mind, the research team worked closely with GAA employees to align the technology solution with use cases that could offer business value to the organization.

Sound pollution

An important step in building strong community relations is ensuring that Croke Park is a good neighbor with events that have a minimal impact on those who live nearby. A key dimension of this is environmental sound monitoring. When it comes to decibels, Croke Park must stay within the parameters established by Dublin City Council. Before the Smart Stadium initiative, an independent third party would record the average noise levels and let the stadium know after the fact whether it was in compliance. Now this monitoring is also done in real time from preselected, fixed locations.

An automated solution to this problem solves a number of issues for the GAA:

  • Reduced overhead in sound monitoring. The pre-existing solution is very manual and requires significant effort throughout a concert to record results. The automated microphones are always running, meaning that all events are captured and enabling a solid historical baseline for comparison to be created.
  • Sound data can be disseminated through multiple channels—for example, a website, a publicly accessible app, or a dashboard accessible by key personnel.

Fan engagement

The ability to monitor sound plays a part in enhancing the fan experience as well. The experimental system developed to allow the park to measure the average noise levels outside the stadium for compliance was repurposed to create friendly fan competition within the stadium bowl. Strategically positioned microphones capture maximum decibel peaks in crowd cheering levels, and gateways send this information to the Azure IoT hub. Data is presented on a dashboard to the staff, who in turn project it on a stadium screen, enabling them to “gamify” the data and identify which section is making the most noise. A great example of this is the data that was presented during the 2016 All-Ireland Hurling and Football Finals, which compared the noise levels at particular points in the games: key scores.

Sept. 18 football final, first-half sound analysis


Health and safety

The Etihad Skyline tour at Croke Park offers visitors unmatched panoramic city views and insights into Dublin’s celebrated landmarks. While a stroll around the top of one of Europe’s largest stadiums can be exhilarating, on a windy day a 17-story-high walking tour can be less comfortable. The team has deployed wind speed and direction sensors that collect the data in real time and feed the information back to the tour organizers so they know whether conditions are suitable to proceed with allowing visitors onto the roof. For the Smart Stadium project team, this seemingly simple implementation has far-reaching implications as it illustrates the direct, real-time connection of sensor data with local decision-makers.

Solution

As well as the business requirements noted above, a number of other functional requirements needed to be considered:

  • For sound monitoring, it is important to strategically position microphones within the stadium. To understand noise levels for both crowd cheer and noise pollution level, microphones must be located both within the stadium bowl and externally. For this reason, four microphones were deployed, two on the east side of the stadium and two on the west side. Each side had one microphone inside the stadium and one outside. Outside the stadium, microphone locations were selected in known areas of significant noise leakage due to breaks, such as access corridors, in the concrete bowl infrastructure.
  • Keeping in mind that the stadium represents a microcosm of a city, an important requirement for the project was that the team learn how best to deploy an IoT solution and use these learnings to help other companies build their own smart-city solutions. This means carefully architecting solutions so that they can scale when applied to the real world.

Engagement approach and team

Preliminary scoping of the project took place in several stages. Intel ran a design-thinking workshop with perspectives from across the employee base at Croke Park, exploring what a day in the life of a smart stadium would look like with particular focus on how to improve the big-match experience. Internal stadium use cases were later worked up as a collaborative co-design endeavor with partners describing local needs and challenges and reviewing technological options, including some devices that were being tested in Dublin City. At key stages, interviews and discussions took place with core employees—for example, the pitch manager and communications officer—to elicit engineering requirements and better understand their work practices, tools, and flows.

Throughout the process the team was carefully guided by senior stadium management. Follow-up interviews and observations are planned toward the end of the deployment to assess usage of the technology and data insights generated. DCU also ran an exercise developing insight stories around two key personae: four 35-year-old sports fanatics traveling long-distance to the stadium and a family attending with children with particular focus on wayfinding, queuing, and age-appropriate entertainment. DCU also worked with Arizona State University on a white paper exploring the ethical implications of IoT within sporting arenas.

One of the challenges in working on the smart stadium project was the variety of skill sets required, including:

  • Sound and weather monitoring specialists. This activity was primarily carried out by Croke Park staff and Sonitus Systems, a specialist sound-monitoring organization.
  • Gateway management, including deployment, networking, and development. Intel deployed and managed all gateways within the stadium, with GAA IT staff providing backhaul connectivity to the network and Internet, where required.
  • Stadium staff for access control and health and safety monitoring.
  • Cloud specialists to handle the ingestion and analysis of collected data. Cloud capabilities were provided by Microsoft.
  • Business intelligence and user experience experts who understand and define use cases as well as develop dashboards and user interfaces for displaying the data in effective ways. All BI dashboarding was provided by Microsoft.
  • Data scientists to analyze data and develop predictive models to proactively act on intelligence extracted from historical data. Data science work was carried out by a team of research scientists at Dublin City University and Intel.

Bringing all of these resources together and successfully managing the delivery of each use case was challenging and required a governance model managed by two core teams:

  • A central governance team responsible for agreeing on use cases and alignment between all parties. This team managed the budget for all delivery and provided the direction for prioritizing the delivery of specific use cases.
  • A core technical team responsible for designing and implementing solutions for each use case.

Technical solution

The technical solution was designed based on the above business, functional, and non-functional requirements. The following table details the components along with the partner responsible for the deployment and operation and notes on how these components satisfied requirements:

Component: Sound monitoring equipment
Provider: Sonitus Systems
Details: EM2010 sound level monitor
Notes: Four sound-monitoring microphones positioned around the stadium, two internal and two external, as per the stadium map.

Component: Gateway devices
Provider: Intel
Details: Four Intel Quark™ gateways positioned strategically around the stadium.
Notes: The key criterion for determining location was networking access and line of sight between sensors and gateways, within optimal transmission distances for the RFBee radios used in the devices.

Component: Master gateway
Provider: DCU
Details: Central Dell machine running Ubuntu 14.04 LTS.
Notes: This machine aggregates and collates all data from the gateways to be pushed to the cloud. This removes the need for internet connectivity on each gateway and provides a certain amount of resilience, as data can be collected and stored locally. This device and the processes that run on it have recently been migrated to a virtual machine running Ubuntu 16.04 LTS.

Component: Azure cloud
Provider: Microsoft
Details: Microsoft Azure IoT services.
Notes: The Azure cloud provides all back-end and business intelligence functions, including device registration, security, data ingestion, real-time analytics, storage, and display.

Component: Cognitive models
Provider: DCU
Notes: DCU works with all of the data collected to analyse its quality and help direct the overall architecture, including the positioning of microphones and how to deal with interference. DCU is also developing machine learning models to predict outcomes based on the data feeds; for example, the likelihood of the Skyline tour being cancelled at certain times.

Security

The security of the smart stadium IoT solution has been built in at a number of layers to ensure the integrity of the data captured at the edge, processed by the gateway devices, and ultimately transmitted to the cloud. To achieve this “security in layers” approach, the following measures were taken:

  • The gateway devices use Intel’s Wind River solution, which adds an enterprise-grade layer of security on top of the base Linux distribution. This includes locking down the gateway OS and encrypting any connection strings required by the gateway to communicate with back-end systems, protecting against endpoint tampering that could compromise the system.
  • Each gateway has a unique registration with the Azure IoT hub, ensuring that if a device was compromised, it couldn’t compromise the entire solution. The device can then be further controlled from the IoT hub, allowing a compromised device to be taken off the network.
  • All data sent to the Azure IoT hub is sent over HTTPS and hence is encrypted.
  • Data is stored in an Azure SQL Database with transparent data encryption enabled. This encrypts the entire storage for the database, including data files, log files, and associated backups.

Architecture

The fundamental premise of the technical solution was to design with the following architectural concepts in mind:

  • Loosely coupled components. This meant that each component used was not dependent on any other component or could easily be replaced, updated, or removed without affecting the entire solution. The benefit of this approach was that the team could test individual components and replace or update independently when required.
  • Queue-centric approach. Following on from the loosely coupled approach, the project team wanted to build as much resilience into the solution as possible. For example, one of the challenges within the project was positioning of microphones in relation to the radio antenna used to provide connectivity for the microphone gateways back to the master gateway that communicated with the cloud. Messages sent to the master gateway are forwarded to the cloud using the IoT hub. If there is an issue with connectivity, this data is still stored on the Sonitus system and logged on the gateway. This same principle was adopted in the cloud where different services performed separate functions and communicated with each other via queues.
  • Separation of concerns. As well as loosely coupling components and queuing communications between them, each component was designed to provide a certain function and nothing else. This again supports the maintainability and extensibility of the solution by allowing each component to be updated without affecting the entire solution. This proved critical in this IoT project as there are so many components doing different things. The best example of this is separating ingestion from real-time communications via the IoT Hub and Stream Analytics respectively. When we wanted to update or amend a new real-time query to the data, we could stop the Stream Analytics job without affecting the ingestion and update the queries before restarting the service. This is fundamental to creating a solution that could scale to a globally deployed IoT scenario.

Components

The complete solution is made up of a number of components, both within the stadium and in the cloud. The following section details these components and explains how they interact with the above architectural principles in mind.

Sensors and gateway equipment

Sound monitoring

In order to capture noise levels throughout the stadium, sound monitoring equipment was positioned at four points—two within the stadium at the stands and two outside the stadium. This allowed us to measure crowd cheer within the stadium but also compare this to external sound to monitor noise pollution for neighboring areas. The following photo shows the position of one of the Sonitus microphones. Sound data is measured by the microphones and averaged over a 1-minute period and then sent to the closest gateway where it is then sent to the cloud via the master gateway.

Weather station

A weather station was also deployed to measure wind speed and other data and was positioned at the top of the stadium between the Cusack and Davin stands. The following diagram shows a schematic of all of the different sensor equipment deployed as well as the gateways. Weather data is sent to the gateway every 30 seconds and then sent to the cloud via a master gateway.

Schematic

Gateways

The job of the gateways is to collect the relevant sensor data, both sound and weather, and communicate it back to the master gateway. To connect to the gateways, each piece of monitoring equipment is connected to a Seeed Studio RFBee v1.1 using simplex communication. Each unit is set up in transceive mode (send and receive), baud rate 9600 8N1, and no flow control. These units are attached via USB and use a UartSBee adapter. The data is transferred in wireless serial mode using UART between the RFBee unit and the gateway.

The gateways used are SuperMicro E100-8Q units, based on the Intel Quark processor and powered by Wind River Linux 5.0.1.

Master gateway

With this initial work done, each gateway sends data to the master gateway, which in turn sends data to the cloud, specifically an IoT hub. This allows the data to be aggregated and cached centrally within the stadium for edge analytics. Once the master gateway receives the data, it connects to the IoT hub and transmits it. The master gateway has recently been upgraded and migrated to a virtual machine running Ubuntu 16.04 LTS.

IoT hub

Azure IoT Hub is a cloud service responsible for device registration, securing the data transfer, and high-volume data ingestion. In this case the master gateway is the only device registered with the hub, and a unique device ID is passed in all payloads so we can establish which gateway the data originated from. This means the IoT hub is sent all data collected, including weather and sound data. The data format must be JSON.

To connect with the IoT hub, the master gateway can send data over MQTT, AMQP, or HTTPS, and must use an encrypted tunnel. A number of SDKs are available, but in our case LUA scripts were used to connect directly with the REST API and send the data over HTTPS.
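The source post shows the LUA script and the JSON payload structure only as an image. Purely as a hypothetical illustration of the shape such a payload might take (every field name below is invented, not the project’s real schema):

import json

# Hypothetical payload; the project's actual schema is not reproduced here.
payload = {
    "deviceId": "master-gateway-01",   # unique device ID registered with the IoT hub
    "sensor": "sound-east-internal",
    "timestamp": "2016-09-18T16:05:00Z",
    "laeq": 92.4,                      # one-minute averaged sound level (dBA)
    "lamax": 104.1                     # maximum level within the interval (dBA)
}

print(json.dumps(payload))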

Stream Analytics

Once data has been ingested into the IoT hub, Stream Analytics is used to analyze it and create a stream of data for each use case. Each stream is then output to a table in the SQL database. There are two Stream Analytics jobs, one to handle weather data and another to handle sound. Each job has the IoT hub as input and defines a number of outputs. The following table outlines each Stream Analytics job, its input, its outputs, and a description of each query:

Job: CrokeParkStreamAnalytics (input: IoTHub)
  • Output sql: sends all sound data points to a SQL database table. The LAMax values are used to evaluate the intensity of crowd cheer.
  • Output sqlrolling: retrieves a rolling average of LAeq values to give a normalized view of sound data for noise pollution. The average is calculated as a logarithmic average.
  • Output soundblob: sends all data points to blob storage for diagnosis.

Job: CrokeParkWeatherStreamAnalytics (input: IoTHub)
  • Output sql: sends raw weather data to a SQL table.

SQL database

An Azure SQL database is used to store relevant data for analysis and display. SQL Database is a fully managed relational database service hosted on Azure and offers a 99.99% availability SLA, which helps this solution scale to a full production system. The database supports 100 database transaction units (DTUs) but can scale to 4,000 DTUs when required. The DTU is a blended measure of performance that can be used to get predictable performance.

The database tables are populated by the Stream Analytics jobs described previously.

BLOB storage

As well as storing the data in a structured store like SQL Database, we have also used BLOB (binary large object) storage to store all raw JSON data that is sent to the IoT hub. This has allowed the team to query the JSON directly and review the format as well as diagnose any issues with data ingestion. This also allows us to take sample data to test with Stream Analytics jobs’ queries.

The following image shows how the storage account can be monitored, including data inserts over time and egress of data. For the purposes of this pilot, a geo-redundant storage account was used. This meant that all data was replicated three times within the Dublin Azure region and also replicated to the Netherlands region in case of a disaster. This is important, as it forms the basis of our disaster recovery plan: if something happens in Dublin, we can rebuild the entire system using the data stored in this BLOB account. Even if Dublin were completely down due to some natural disaster, all data could be retrieved from the Netherlands.

Web API

The Web API application was created in Visual Studio 2015 and contains a number of REST APIs to access the data stored within the SQL database. This REST API is then used by front-end dashboards or apps that need access to any of the data.

Web dashboard

A simple web dashboard was created to present the data online. The dashboard was built using the MVC framework in Visual Studio 2015 and used Power BI Embedded to embed Power BI dashboards. The Power BI dashboards were built using Power BI Desktop and then uploaded to Microsoft Azure to make them available to the web dashboard. To get started building the dashboard, sample code from the Azure team was used.

Two Power BI reports were created and uploaded to an Azure Power BI workspace. Both reports connect directly to the SQL database described previously and use a number of views to present the following information:

  • 15-minute rolling average sound data and maximum spikes for the last 20 minutes, 60 minutes, 2 hours, 1 day, and any specific date to review historical data. This data is represented in a single Power BI data set as tabs; see the following diagram.
  • Most recent wind speed data, average wind speed data for the current day, and a time series graph of wind speed for the current day. This graph helps indicate the likelihood of a skyline tour taking place.

The following diagrams show the dashboard interfaces:

Conclusions

The purpose of this project was primarily research, and some decisions were made in that light. For example, the choice of radio and gateway devices would be different if this project were rolled out for full production. That said, conducting this research project with a clear view of the business models and use cases from the outset has enabled all stakeholders to learn exactly what is involved in developing and deploying an IoT solution that can drive business value. To do this, the project team had to ensure that certain elements of the infrastructure and the outputs of the solution were robust enough to demonstrate real business value with a clear view of total cost of ownership and return on investment. To achieve this, we did several things:

  • From the outset, the team carried out several stakeholder workshops and interviews to scope use cases and define user requirements.
  • The project engaged with Sonitus Systems, an industrial sound monitoring organization commissioned by Dublin City Council to monitor sound pollution around the city. This proved invaluable in designing the dashboard interfaces and allowed us to prove that collecting the data centrally and sharing this data via the dashboard could greatly improve the productivity of the sound monitoring exercise and at the same time provide a mechanism to compare site measurements against sound pollution regulations in real time and potentially alert sound desk engineers of impending infractions.
  • Once the solution was ready we engaged with the GAA communications team to implement a fan engagement scenario, the crowd cheer or “the roar of the 16th Player,” during the All-Ireland intercounty football and hurling finals. In addition to engaging the crowd via the big screens in the stadium, this simple scenario saw the GAA get over 25,000 impressions on one single tweet sent during the hurling final alone. This engagement far exceeds anything that was done before and offers real value for the GAA in engaging with their members and fans.
  • The use of Microsoft Azure and Power BI as the back-end ingestion and analytics platform allowed the team to deploy a solution quickly as the IoT and dashboarding services available are pre-built for these types of scenarios. This resulted in a lot less coding and development work than would have been required if the solution was built from the ground up. Using Azure also meant we were using a robust platform that could handle the relatively low amounts of data we were processing for this pilot but could scale to handle a significantly larger deployment and keep the costs in line with this scale. This has allowed the team to more accurately predict the cost of the solution as it grew.

Learnings

  • Team makeup and governance. As this IoT project has proved, quite a few elements need to work together, from sensors to gateways to data collection and analysis. This means working with a variety of different technologies that require different skills and resources to get working well. It also requires working with a number of partners and/or vendors to realize the value of the project. Creating a good governance framework and rhythm of business from the outset was critical for the success of the project. This involved a biweekly steering group meeting to agree on strategy and report on progress with representatives from all partners as well as a technical core team that discussed implementation details and reported back up to the steering committee.
  • Health and safety. Due to the nature of an IoT project, equipment must be installed and/or configured on site. For this project we were challenged with connectivity between microphones and gateways and had to move gateways to find the optimum positioning to ensure connectivity. Any of this work requires physical access to stadium stands and personnel on site to support it; these activities take time and need to be accounted for in project plans. Coordinating schedules and fitting into busy working calendars across different stakeholders is a major challenge: a stadium constantly has big events, ongoing construction, and IT upgrades. Staff are busy with their day jobs and, although often enthusiastic and helpful, it can be hard to find the right person for the right task at the right time. Getting access to stand areas and ladders, coordinating between different facilities groups, dealing with regulations, and opening network ports can all mean that a task estimated to take a morning takes several days. The team concluded that the most efficient model is for the deployment engineer to embed locally and spend a block of 3-4 days on site rather than splitting a task across repeat visits.
  • Environment and resilience. Large buildings, much like large organizations, evolve over time, and often unevenly. Behind the streamlined front-of-house processes of a stadium, council, or company lie many processes, silos, personalities, and fiefdoms. Equally, IT systems are not guaranteed to be interoperable, and networks of cables are seldom clearly mapped out and understood. As this was a proof of concept, some of the equipment used, and how it was connected, would not be robust enough for a production environment. RFBee radio connectivity, for example, is not robust enough, and meant we spent a lot of time positioning gateways and microphones for optimal connectivity. A stadium is a complex, densely built concrete-and-steel environment with multiple radio frequencies and sources of interference. Most sensors used were wireless, and we faced challenges with antenna direction and with range and penetration in the 868 MHz band. The metal turnstile gateway required considerable effort to extend and mount an external antenna. It is also important not to underestimate the disruption that environmental factors such as temperature, moisture, air, and animals can cause to an IoT system. These experiences are steering how Intel thinks about the selection of materials and components and redefining operational capabilities. For example:
    • New separate enclosures for gateways had to be created, and power supplies and RF connectors for sensor nodes had to be hardened and improved.
    • New brackets had to be created to stop large birds from sitting on and moving the cameras.
    • Hardware had to be relocated to secure, locked areas to avoid accidental interference by non-project members.
  • Packet loss. Loss of data packets from the sound level monitoring system was experienced, especially during big events such as the hurling and Gaelic football finals, when the stadium is filled with over 82,000 fans. This issue is not unique to this deployment: mobile carriers and Internet providers have spent decades establishing fast and reliable wireless data communication channels in this kind of highly dense, highly dynamic environment. It was difficult initially to understand where the loss occurred; however, after investigation, a number of issues were discovered:
    • JSON format. The Azure back-end setup relies on data being sent in valid JSON format before being analyzed and aggregated in real time. Any invalid value or changes to the format resulted in dropped packets by Azure. New functionality in Stream Analytics makes it very easy to test real-time analytics scripts against sample data, but there seems to be no real way of dealing with invalid formats. To resolve this, all data was pushed to BLOB storage so that we could analyze all packets to understand where formats were changing. This helped greatly in agreeing on a strict format and sticking with it as well as making the Stream Analytics jobs flexible enough to handle new data sent within the packets. Invalid JSON was much more difficult to deal with and is simply dropped at the moment. It is worth considering using routines on the gateways or master gateway to validate JSON formats before sending to Azure and alerting or logging results. In addition, a two-way communication mechanism may be applied so that Azure can request the gateway to re-send the missing/damaged data at quieter times.
    • The current setup may suffer from radio signal interference in this highly dynamic, high-density environment. The motion of human bodies and the signals from their mobile devices, along with wireless signals from TV broadcast crews, Garda officers, and security teams, may all interfere with the wireless data communication between the microphone and the gateway. This can be seen in the following graph: the number of packets lost started decreasing once the game was over, and almost no packets were lost once all the fans had exited the stadium. This issue is well recognized, and a project is under way to build more resilience into the system.

A sample of sound data packets lost at a match day (2016 Gaelic Football final replay, over 82,000 fans in attendance)

The network traffic at Croke Park during a game day. A huge network traffic increase occurred during the half-time break (red: incoming traffic; blue: outgoing traffic).

“Black out” periods were also experienced at the half-time break, during which the Azure platform received no data from the sound monitoring system. In addition to the causes discussed above, we have theorized that people connecting their mobile devices to the free Croke Park Wi-Fi network at the start of half-time generate a surge of network traffic (as seen in the graph above). This may jam the data transmission between the gateway and Azure, further increasing the data lost. A solution to this will also be tested in the project under way described above.

  • Payload frequency. The edge microphones aggregated data and sent payloads to the gateways every minute. This frequency was perfectly fine for the sound pollution use case, where data is aggregated over 15-minute periods; however, it could mean missing a spike at a match. As this solution proved very successful for fan engagement, the team will be investigating opportunities to increase the data frequency.

Additional resources


How to use PostMan to check Predix Service

Copied from https://vyatkins.wordpress.com/2016/02/04/cloud-foundry-starter-guid-for-predix-io/

Update a client

$ uaac client update client_name --scope acs.attributes.write,acs.policies.read,acs.policies.write,acs.attributes.read

After updating the client, refresh the cached token with the following command:

$ uaac token delete

If you need to change your client's secret, you must add the password.write permission to the scope and authorities of the client.

$ uaac secret change -h
secret change        Change secret for authenticated client in current context
--old_secret , current secret
-s | --secret , client secret

How to use PostMan to check Predix Service 

Time Series Service

Get an OAuth token


Use the UAA URL from the app environment field "issuerId", and add the headers Content-Type: application/x-www-form-urlencoded and Authorization: Basic, followed by the Base64-encoded client_id:client_secret.
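Outside Postman, the same token request can be scripted; a sketch using Python requests and the client-credentials grant, with the UAA URL and client credentials as placeholders:

import requests

uaa_url = "https://<your-uaa-instance>.predix-uaa.run.aws-usw02-pr.ice.predix.io"

# requests builds the "Authorization: Basic ..." header from the id/secret pair.
resp = requests.post(
    uaa_url + "/oauth/token",
    auth=("<client_id>", "<client_secret>"),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={"grant_type": "client_credentials"},
)
token = resp.json()["access_token"]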

Once you have an authorization token, post a request to the Predix service, like the Time Series example below.


Get the Predix-Zone-Id value from the app environment field, and add "Bearer " before the token in the Authorization header value.

The body of the request may contain the following values:

Request body
{
    "start": "15d-ago",
    "end": "1mi-ago",
    "tags": [
             {
              "name": "RMD_metric1"
             }
      ]
}
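The same query assembled in Python (a sketch; the endpoint shown is the usual Predix Time Series query URL, but take the exact URL and Predix-Zone-Id from your own service instance's environment):

import requests

query_url = "https://time-series-store-predix.run.aws-usw02-pr.ice.predix.io/v1/datapoints"
token = "<access-token-from-uaa>"  # obtained as in the token step above

resp = requests.post(
    query_url,
    headers={
        "Predix-Zone-Id": "<your-predix-zone-id>",
        "Authorization": "Bearer " + token,
    },
    json={
        "start": "15d-ago",
        "end": "1mi-ago",
        "tags": [{"name": "RMD_metric1"}],
    },
)
print(resp.json())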

Asset Management Service

Post

Get

Get with a filter on a field containing complex JSON

https://predix-asset.run.aws-usw02-pr.ice.predix.io/windturbine?filter=location.lat=37.767*


Patch

Patch body
[
  { "op": "replace",
    "path": "/serial_no",
    "value": "N123-0857" }
]

"op" is the operation.
"path" is the field name (in our case, serial_no).
"value" is the new value for the serial number of the locomotive with URI /locomotives/1.

For more details, please check RFC 6902, JavaScript Object Notation (JSON) Patch.

Patch operations examples
[
 { "op": "test", "path": "/a/b/c", "value": "foo" },
 { "op": "remove", "path": "/a/b/c" },
 { "op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ] },
 { "op": "replace", "path": "/a/b/c", "value": 42 },
 { "op": "move", "from": "/a/b/c", "path": "/a/b/d" },
 { "op": "copy", "from": "/a/b/d", "path": "/a/b/e" }
]
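To experiment with these operations locally before sending a PATCH request, the third-party Python jsonpatch package implements RFC 6902 (a sketch; install with pip install jsonpatch):

import jsonpatch

doc = {"serial_no": "OLD-0001"}
patch = [{"op": "replace", "path": "/serial_no", "value": "N123-0857"}]

# apply_patch returns a new document with the operations applied.
result = jsonpatch.apply_patch(doc, patch)
print(result)  # {'serial_no': 'N123-0857'}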


Get after patch


Delete

How to run a Scala "Hello World" program on Predix.io

Check that you have sbt on your computer with Java 7.

If not, install sbt using:

brew install sbt

$ git clone https://github.com/SVyatkin/hello-scala.git

$ sbt clean package

$ cf push

$ curl hello-scala.run.aws-usw02-pr.ice.predix.io

Scala Hello World Example on Predix.io

Useful links:
Cloud Foundry Docs

 


Microsoft Azure vs. Amazon Web Services: Cloud Comparison

Direct side-by-side comparisons aren’t always possible between two service providers like Azure and Amazon, but some of them are close enough. The table below is an attempt at making those comparisons. This list of services is far from complete.

Category | Microsoft Azure | Amazon Web Services (AWS)
Available Regions | Azure Regions | AWS Global Infrastructure
Compute Services | Virtual Machines (VMs) | Elastic Compute Cloud (EC2)
 | Cloud Services, Azure Websites and Apps | Amazon Elastic Beanstalk
 | Azure Visual Studio Online | None
Container Support | Docker Virtual Machine Extension (how to) | EC2 Container Service (Preview)
Scaling Options | Azure Autoscale (how to) | Auto Scaling
Analytics/Hadoop Options | HDInsight (Hadoop) | Elastic MapReduce (EMR)
Government Services | Azure Government | AWS GovCloud
App/Desktop Services | Azure RemoteApp | Amazon WorkSpaces, Amazon AppStream
Storage Options | Azure Storage (Blobs, Tables, Queues, Files) | Amazon Simple Storage Service (S3)
Block Storage | Azure Blob Storage (how to) | Amazon Elastic Block Store (EBS)
Hybrid Cloud Storage | StorSimple | AWS Storage Gateway
Backup Options | Azure Backup | Amazon Glacier
Storage Services | Azure Import Export (how to) | Amazon Import/Export
 | Azure File Storage (how to) | AWS Storage Gateway
 | Azure Site Recovery | None
Content Delivery Network (CDN) | Azure CDN | Amazon CloudFront
Database Options | Azure SQL Database | Amazon Relational Database Service (RDS), Amazon Redshift
NoSQL Database Options | Azure DocumentDB | Amazon DynamoDB
 | Azure Managed Cache (Redis Cache) | Amazon ElastiCache
Data Orchestration | Azure Data Factory | AWS Data Pipeline
Networking Options | Azure Virtual Network | Amazon VPC
 | Azure ExpressRoute | AWS Direct Connect
 | Azure Traffic Manager | Amazon Route 53
Load Balancing | Load Balancing for Azure (how to) | Elastic Load Balancing
Administration & Security | Azure Active Directory | AWS Directory Service, AWS Identity and Access Management (IAM)
Multi-Factor Authentication | Azure Multi-Factor Authentication | AWS Multi-Factor Authentication
Monitoring | Azure Operational Insights | Amazon CloudTrail
 | Azure Application Insights | Amazon CloudWatch
 | Azure Event Hubs | Amazon Kinesis
 | Azure Notification Hubs | Amazon Simple Notification Service (SNS)
 | Azure Key Vault (Preview) | AWS Key Management Service
Compliance | Azure Trust Center | AWS CloudHSM
Management Services & Options | Azure Resource Manager | Amazon CloudFormation
API Management | Azure API Management | Amazon API Gateway
Automation | Azure Automation | AWS OpsWorks
 | Azure Batch, Azure Service Bus | Amazon Simple Queue Service (SQS), Amazon Simple Workflow (SWF)
 | Visual Studio | AWS CodeDeploy
 | Azure Scheduler | None
 | Azure Search | Amazon CloudSearch
Analytics | Azure Stream Analytics | Amazon Kinesis
Email Services | Azure BizTalk Services | Amazon Simple Email Service (SES)
Media Services | Azure Media Services | Amazon Elastic Transcoder, Amazon Mobile Analytics, Amazon Cognito
Other Services & Integrations | Azure Machine Learning (Preview) | Amazon Machine Learning
 | Azure Functions | AWS Lambda (Preview)
 | Service Bus | AWS Config (Preview)
From http://www.tomsitpro.com/articles/azure-vs-aws-cloud-comparison,2-870-2.html
