
Connectivity

An Introduction to AWS Cloud Development

22 October 2019 by Scionova

My experience from Cloud Development

Roughly one year ago I began developing in AWS (Amazon Web Services), and I could immediately see what all the buzz was about. AWS, and other cloud platform providers (Azure, GCP etc.), supply an immense number of services. These range from fully managed, completely serverless offerings to dedicated servers over which the customer has (almost) complete control. Together they provide the tools to develop scalable, secure, agile and cost-efficient applications, where hardware, servers and OS management can be abstracted away to allow more focus on the actual code of your application.

In the past year I have been involved in the development of a microservice in AWS, and the past month has been spent preparing for, and later passing, the AWS Certified Solutions Architect – Associate exam.

 

What problems could be solved by using a Cloud Platform?

Imagine that you are part of a start-up aiming to develop a platform that provides a marketplace for autonomous car owners to make their vehicle available for hire when they are not using it themselves. Traditionally, starting the development of the application requires servers to run the application and databases, as well as other hardware, e.g. network equipment to enable connection to the internet.

As development continues, changes in requirements might need additional hardware to be purchased or existing hardware to be replaced. Build tools and other CI-related tools are required to ensure the quality of your platform, all of which need their own servers. Moving the solution to production might require yet another set of equipment, with capacity to handle spikes in the load of a production environment.

To minimize downtime of the platform, back-up servers will have to be purchased. All this equipment is associated with a capital investment which might not be available for your start-up without the assistance of an external investor. In the cloud, services are paid for in a pay-as-you-go model. No up-front investment is required, which better fits the financial situation of your start-up.

 

Serverless Architecture

For the last couple of years, ‘Serverless’ has had a huge impact on the industry. But what does it really mean to have a Serverless Architecture? Traditionally, applications have been installed on a specific physical server, where everything from hardware to application requires maintenance. With a serverless architecture, the only concern for the developer is the code of the application. The rest is left to the cloud provider to maintain. Of course, this does not mean that code is not running on a server, only that it is not important which server the code runs on.

Other than reduced maintenance, a serverless architecture brings automated scaling of your environment, a cost model that depends on the actual utilization of the services and a more microservice-friendly infrastructure.

 

AWS Services

These are some AWS services which could provide an entry point for someone who wants to get acquainted with the AWS ecosystem:

 

  •       AWS CloudFormation (https://aws.amazon.com/cloudformation) – An effective tool for describing your AWS infrastructure resources as code (YAML, JSON) in CloudFormation templates. This enables you to store the specification of your entire infrastructure in version control. When changes are made to the template, CloudFormation calculates the delta from the deployed set of resources into a change set and then executes the change set. Having your infrastructure as code is also crucial when managing multiple staging environments (Dev, QA, Prod etc.), where it is important that the environments are identical to facilitate code and infrastructure quality assurance. An alternative to CloudFormation is Terraform, an open-source, platform-independent tool for describing infrastructure (https://www.terraform.io). A minimal deployment sketch follows after this list.

 

  •       AWS Lambda (https://aws.amazon.com/lambda) – This is the core of any serverless application developed in AWS. A Lambda basically consists of the code that you want to execute and something that triggers it, e.g. an API call or an update to a database table. When the Lambda is triggered, the supplied code is deployed and executed, and shortly after the execution finishes the deployment is removed. Parallel triggerings of a Lambda result in multiple concurrent deployments of the code, and this scales automatically up to the account's concurrency limits. The cost of a Lambda is based on the number of invocations together with the execution time and the amount of memory allocated for the Lambda code.

 

  •       Amazon EC2 (https://aws.amazon.com/ec2) – Elastic Compute Cloud – EC2 is a service that provides a virtual machine, or “instance”, on a server in AWS. When deploying an instance, it is possible to choose from an abundance of instance types and pre-configured operating systems with different application setups. Instance types range from cheaper general-purpose instances to more expensive, specialized ones, e.g. instances equipped with more graphics resources for graphics-intensive or machine learning workloads. Each instance type also comes in several sizes to support workloads of varying load.

 

  •       Amazon VPC (https://aws.amazon.com/vpc) – Virtual Private Cloud – VPC is used to set up a network in AWS. The network can then be equipped with subnets with different CIDRs, NAT gateways, Internet Gateways, Load Balancers, services to connect the network to an on-premises network, etc., all without setting up any hardware yourself. EC2 instances can be deployed in public and private subnets to provide a tiered application setup where database instances and back-end instances (in the private subnets) are only accessible through the front-end instances (in the public subnets).

 

  •       Amazon ECS (https://aws.amazon.com/ecs) – Elastic Container Service – A container orchestration service provided by AWS that supports Docker containers. The service is available in two modes, EC2 and Fargate. The EC2 mode requires the developer to manage the EC2 instances that the containers run on, as well as the scaling of the number of instances, while the Fargate mode is fully managed in this regard. An open-source alternative to Amazon ECS is Kubernetes (https://kubernetes.io).

 

  •       Amazon DynamoDB (https://aws.amazon.com/dynamodb) – DynamoDB is a fully managed NoSQL database that provides great performance and is highly scalable. DynamoDB together with AWS Lambda and Amazon API Gateway (https://aws.amazon.com/api-gateway) provides all the tools required to build a small, simple and completely serverless microservice (sketched after this list). A NoSQL database is an excellent fit for applications with well-defined database access patterns, i.e. the queries that will be executed are known at the design phase and the NoSQL table(s) can be designed accordingly. However, for applications with more ad hoc database access patterns, a SQL database would be a better choice. This session from AWS re:Invent 2018 (https://www.youtube.com/watch?v=HaEPXoXVf2k) takes a deep dive into DynamoDB and explains when a NoSQL database should be used.

 

  •       Amazon S3 (https://aws.amazon.com/s3) – Simple Storage Service – A managed object storage service which provides a whopping 99.999999999% durability for any stored object, achieved by storing the objects in multiple data centers across a Region (a Region comprises a set of Availability Zones, which in turn comprise one or more data centers). S3 is used for storing different types of files, e.g. videos, images and documents. Features of S3 include versioning of objects, replication of objects to another AWS Region, archiving objects to cheaper storage options when they are no longer frequently accessed (e.g. Amazon S3 Glacier, https://aws.amazon.com/glacier), hosting static web content, etc.
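
To make a couple of these services more concrete, here is a minimal sketch, assuming a hypothetical rental-marketplace project, of how a CloudFormation template describing a single DynamoDB table could be deployed from Python with boto3. In a real setup the template would live in its own file under version control rather than in an inline string, and the stack and table names below are made up.

    import boto3

    # Minimal CloudFormation template describing one DynamoDB table (illustrative only).
    TEMPLATE = """
    AWSTemplateFormatVersion: '2010-09-09'
    Resources:
      RentalsTable:
        Type: AWS::DynamoDB::Table
        Properties:
          TableName: rentals
          BillingMode: PAY_PER_REQUEST
          AttributeDefinitions:
            - AttributeName: vehicle_id
              AttributeType: S
          KeySchema:
            - AttributeName: vehicle_id
              KeyType: HASH
    """

    cloudformation = boto3.client("cloudformation")
    cloudformation.create_stack(StackName="rental-marketplace-dev", TemplateBody=TEMPLATE)

And a similarly hedged sketch of the Lambda side of the Lambda + API Gateway + DynamoDB microservice mentioned above: a handler (with hypothetical field names) that an API Gateway route could trigger to store a rental listing in the table created by the template.

    import json
    from decimal import Decimal

    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("rentals")  # the table defined in the template above

    def lambda_handler(event, context):
        """Triggered by API Gateway; stores one rental listing in DynamoDB."""
        body = json.loads(event["body"])
        table.put_item(Item={
            "vehicle_id": body["vehicle_id"],
            # boto3 requires Decimal rather than float for DynamoDB numbers.
            "price_per_hour": Decimal(str(body["price_per_hour"])),
        })
        return {"statusCode": 201, "body": json.dumps({"stored": body["vehicle_id"]})}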

 

Some things to be aware of

As mentioned, cloud development offers lots of opportunities moving forward. However, there are some things to take into consideration when deciding on whether to move your solution to the cloud. Platform lock-in is one thing. Deciding on a cloud provider will most likely make you dependent on that company’s solution, which will make you vulnerable to any changes in price or functionality of the used services.

Another consideration is the difference in price model compared to traditional development. Services in the cloud, especially managed services, are often priced per invocation and/or per unit of data transferred/processed/stored. This model might not suit solutions that have an even, non-fluctuating load over time; in that case a fleet of EC2 instances could be a better fit. Neglecting this detail when designing a solution could result in unnecessarily high costs.

Being well-prepared when designing a solution is key to avoid these pitfalls. The AWS Well-Architected Framework (https://aws.amazon.com/architecture/well-architected) provides some guidelines on how to architect a solution with performance, security, cost etc. in mind.

 

If you are interested in starting your journey in the cloud in general and AWS in particular and are eager to learn more about the possibilities and risks of cloud development, do not hesitate to contact me at Daniel.Andersson@scionova.com.

 

Debug your way to understanding

24 September 2019 by Scionova

If it isn't already, this blog post will give you some practical tips on how to make your debugger your best friend. If you are new on the job and thrown into a big legacy system, it can sometimes be really difficult to understand the flow of the code. Even when you are familiar with the codebase, a particularly complicated piece of code may leave you stumped. A debugger can be a helpful tool to understand what is happening and where information stems from. Going back to basics and really getting to know how the debugger(s) in your IDE(s) work can be a real boost.

 

This blog post is not supposed to be a tutorial for any one IDE. Instead, I will go through debugging concepts that are present in most IDEs, as well as some nifty tricks found in specific debugging tools. I am most familiar with the Eclipse debugging tool, so most of my references will point there. However, the concepts that I cover are present in most debugging tools.

 

Get to know your break points

Debugging is stepping through your code, line by line, while monitoring the changes in your code's variables. Most of the time you are not interested in each and every line of code; you are interested in a specific part of the code or a specific variable. As such, you want to be able to tell your debugger when it should pause execution for closer inspection. You do this with breakpoints.

 

The most basic breakpoint will simply stop the execution at the given line; however, you can do much more with breakpoints! One of my most used breakpoints is the conditional breakpoint. As the name suggests, it will halt execution when a given condition is met or when a value changes. This allows you to focus on what is really important.
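
As a small, language-agnostic illustration (sketched here with Python's built-in pdb, since the idea is the same in Eclipse or Visual Studio; the file name and data are made up), a conditional breakpoint pauses only when an expression you choose is true:

    # orders.py (hypothetical) - we only want to stop on the problematic order.
    def process(orders):
        total = 0
        for order in orders:
            # In an IDE you would set a breakpoint on the next line with the
            # condition: order["amount"] < 0
            total += order["amount"]
        return total

    # The equivalent in pdb attaches the condition to the breakpoint itself:
    #   (Pdb) break orders.py:7, order["amount"] < 0
    #   (Pdb) continue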

 

A really helpful breakpoint is the exception breakpoint. If you have no idea why an exception is thrown and you want to find out the cause, an exception breakpoint will halt execution right when the exception is raised. It is then easy to see what triggers the exception, and what the cause of that trigger is.

 

If you are working with a multi-threaded application and you want to follow the execution of a specific thread, you may want to filter a breakpoint on a specific thread. Then you can follow the execution of that thread without having to worry about being confused by another thread. 

 

You can also do some additional things with breakpoints, such as suspending execution when the breakpoint has been hit a certain number of times or suspend all breakpoints until a certain breakpoint has been reached.

 

Get to know the controls 

When you have reached the code of interest for debugging, you may want to follow the execution more closely. To do this you have some progression controls at your disposal. 

 

The basic progression controls for debugging are fairly self-explanatory. The step into option will go into the statement you are currently at. The step over option, in contrast, will jump over the statement and show you the result afterwards. This means that if you choose to step over a big method, you do not have to follow the execution through the entire method; you will only see the result.

 

The final basic progression tool is the step return, which will take you out to the caller of the current statement. If you are in a method this would mean that you would go to the place where the method was called. 

 

A lesser known, but very useful, progression option is drop to frame. Using this will take you back to the beginning of the frame you are currently in. In the case of a method, this means you will be taken back to the very top of the method. This can be very useful when tracking down where an issue originates. You might step over a piece of code with the step over function and find that a method call is the root of your error. Unfortunately, stepping backwards is generally not an option while debugging. Drop to frame will, however, take you back to the beginning of the piece of code you are currently in, and you can then choose to step into the statement that caused the error.

 

The drop to frame option is also very useful if your IDE and debugger allow hot changes, that is, changing the code while debugging without having to recompile and rerun your application. If you are working on a big project that takes a long time to compile or run, this can be a great time-saver. In Eclipse, while working with Java, you can make code changes as long as they do not affect method headers or similarly large structures. After making a change and saving, you are automatically brought to the top of the frame, allowing very quick feedback on your change.

 

If you are working with an application that is triggered by another application, you may think that you cannot debug it properly. These situations can be really difficult, as it is not the application that you want to debug that drives the execution. However, if you are developing in Visual Studio you are in luck! Visual Studio allows you to attach your debug session to a running process, so that the triggering application drives the execution while you debug. To do this, go to Debug > Attach to Process and select the process to which you want the debugger to attach.

 

Make friends 

While debugging may seem basic, and the prospect of debugging your own or others' code may be daunting, I have found that it really helps me understand the applications that I am developing. While I was still a student I rarely saw the use of the debugger; it was mostly just a bothersome tool that never wanted to work for me.

 

When I started working on a bigger project with more complicated data structures, I realized how useful it can be. Getting familiar with the tools and options available to me showed me how essential good debugging skills are to my work as a developer. It takes some getting used to, but as the saying goes, practice makes perfect, and these days the debugger is my most beloved development tool. I hope this blog post has given you some food for thought and perhaps introduced some concepts that you were not aware of before.

 

Best regards,

Lisa

 

IoT… as a Business Approach.

12 August 2019 by Scionova

We are living in a turbulent world where competition is becoming hyperturbulent. New and existing companies must take seriously the job of continually initiating and adjusting to the new Industry 4.0. Internet of Things (IoT) technology is causing immense disruption across many industries, with the pace of change increasing every day. However, not everything is about high-tech connected solutions or state-of-the-art technology development; IoT business is more than that.

 

Cutting edge technology…just one more player.

We all take for granted that our TV is connected to the internet, that our smartphone communicates with our watch, that the smart indoor heating system always delivers the perfect temperature (especially in the freezing Swedish winter), and so on. Yes, the internet has given most of the world's population unlimited access to data and technology. But technology is not the only player needed to develop an IoT business and monetize its benefits.

 

The innovation should not be in technology only; it should also include the development of a new business model and delivery method for IoT services for other organizations and end-customers. Technology gives us many possibilities for creating innovative solutions, but if we cannot materialize them into a business, then a great idea might remain a great design only. IoT business encompasses additional critical players that together create the perfect match to embark on the “IoT journey”.

 

The focus on technology in IoT services means that the business aspects are often overlooked. Successful IoT services are built on the premise of a clearly defined service offering complemented with operational and business models. There is a tendency to treat each of these views in isolation, but effective IoT services develop these models in parallel.

 

Cultural (tech) fact: 

Did you know that the concept of a smart IoT device was introduced back in 1982? A modified Coke machine at Carnegie Mellon University (Pittsburgh, USA) became the first Internet-connected appliance. This Coke machine was able to report its inventory and whether newly loaded drinks were cold.


The Business of IoT.

I remember a conference where the speaker said: “If you haven't started in IoT, you are already late”. That is not completely accurate. IoT will be “alive” for a long time, and we need to take advantage of this with new IoT services, ideas and business models. You will never be too late with innovative ideas, and IoT offers a vast range of possibilities.

 

The basic components of a business are a good product, a reliable business model and customers. Of the latter we have plenty, at least in terms of connected devices. Intel, for instance, projects device penetration to grow from 2 billion in 2006 to 200 billion by 2020, which means nearly 26 smart devices for each human on Earth. Others, such as Gartner, which takes smartphones, tablets and computers out of the equation, estimate 20.8 billion connected devices by 2020.

 

Hence we need to understand the business aspects of the disruption caused by IoT and how to take advantage of the coming opportunities.

 

Technology is only one of many tools used to develop a successful, profitable and sustainable IoT business. There is literature explaining different aspects to consider when developing IoT services to create a successful IoT business, but I would like to mention the two that I consider most important:

 

  • Ecosystems:

    In simple words, for IoT to reach its full potential, it will require several ecosystems and currently “non-cooperating” industries to work together to maximize business.

 

  • IoT as a Service (AaS):

    A “pay-as-you-grow” model in which the customer pays in proportion to the usage of the service. This enables low initial investment, scalability and cost control.

 

IoT business is not solely about technology; there are multiple aspects that must be attended to in parallel. Many IoT projects and businesses are condemned to fail as profitable businesses if people within the organizations do not consider the business-relevant aspects as important as technology development throughout the entire product lifecycle management.

 

Best regards,

 

DevOps and SRE: Where do you draw the line?

20 June 2019 by Scionova

Customers' expectations for application performance are high, and in most cases development teams spend most of their time and effort building valuable new systems, while the delivering teams are under pressure to maintain the stability and reliability of the business systems. This scenario has driven the adoption of DevOps and SRE as formal practices in the development and quality assurance community. Let's discuss what DevOps and SRE actually mean and how they can partner together in an organization.

 

DevOps Engineer:

DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support. It suggests that the best approach is to hire engineers who are excellent coders and can also handle all the Ops functions.

Skills required for a DevOps Engineer:

  • Excellent at scripting languages
  • Knowledge and proficiency with Ops and Automation tools
  • Good understanding of the Ops challenges and how to support/address them during design and development
  • Efficient with frequent testing and incremental releases
  • Soft skills for better collaboration across the teams

More about how to become a great DevOps Engineer:

https://blog.shippable.com/how-to-be-a-great-devops-engineer

 

 

Site Reliability Engineer (SRE):

As Ben Treynor (Founding father of SRE) puts it, “SRE, fundamentally, it’s what happens when you ask a software engineer to design an operations function”. The main goals are to create ultra-scalable and highly reliable software systems.

SRE is a job function that focuses on the reliability and maintainability of systems. It is also a mindset and a set of engineering practices to run better production services. An SRE has to be able to engineer creative solutions to problems, strike the right balance between reliability and feature velocity and target appropriate levels of service quality.

Skills required for an SRE:

  • Ability to run post-mortems on unexpected incidents to prevent future hazards
  • Skilled in evaluating new possibilities and in capacity planning
  • Comfortable with handling the operations, monitoring and alerting
  • Knowledge and experience in building processes and automation to support other teams
  • Ability to persuade organizations to do what needs to be done

More about how to become an SRE:

https://hackernoon.com/so-you-want-to-be-an-sre-34e832357a8c

What differentiates SRE (Site Reliability Engineering) from DevOps? Aren't they the same?

  1. SRE is to DevOps what Scrum is to Agile. DevOps is not a role; it is a cultural aspect that cannot be assigned to an individual and should be practiced as a team. SRE, on the other hand, is the practice of creating and maintaining a highly available infrastructure/service, and it is a role assigned to a software professional.
  2. SREs at times practice DevOps. Whereas DevOps, as practiced in organizations, focuses more on the automation part, the SRE focus is more on aspects like system availability, observability, instrumentation and scale considerations.
  3. Both SRE and DevOps are used for the management of the production environment. Primarily, SREs find and solve some of the production problems themselves, whereas the purpose of DevOps is to find problems and then dispatch them to the dev team for solutions.
  4. Although the concept of DevOps is about handling and coping with issues before they cause failures, failure is something that we, unfortunately, cannot avoid. DevOps embraces this by accepting failure as something that is bound to happen and that can help the team learn and grow. In the world of SRE, this objective is delivered by having a formula for balancing accidents and failures against new releases. In other words, SREs want to make sure that there are not too many errors or failures, even if they are something we can learn from. This formula is measured with two key identifiers: Service Level Indicators (SLIs) and Service Level Objectives (SLOs); a short numeric sketch follows after this list.
  5. SREs work closely with the monitoring of applications or services in production, where automation is crucial to improve the system's health and availability, while DevOps primarily focuses on empowering developers to build and manage services by giving them measurable metrics to prioritize tasks. There seem to be very few people who can fill a senior SRE or DevOps role as intended, since it calls for someone who is a combination of software engineer, architect, system engineer and experienced master.
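
As a minimal numeric sketch of the SLI/SLO formula mentioned in point 4 (all numbers are made up), an availability SLO directly implies an error budget that can be spent on failed releases and incidents:

    # Illustrative SLI/SLO/error-budget arithmetic over a 30-day window.
    slo = 0.999                 # target: 99.9% of requests succeed
    total_requests = 10_000_000
    failed_requests = 6_500     # measured input to the SLI

    sli = 1 - failed_requests / total_requests          # observed success ratio
    error_budget = (1 - slo) * total_requests           # failures we may "spend"
    budget_remaining = error_budget - failed_requests   # < 0: slow down releases

    print(f"SLI {sli:.4%}, budget {error_budget:.0f}, remaining {budget_remaining:.0f}")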

Even though SRE has recently been in the spotlight, the key is to partner DevOps and SRE to get systems into production effectively and efficiently, by thinking about how a system will run in production before it is built. One way to achieve this is by integrating the quality team in the end-to-end DevOps workflow to agree on metrics like code coverage and downtime thresholds.

The SRE and DevOps concepts can still cause confusion at some level, but much depends on the company and how the job profile is interpreted. The roles and names might vary, but at the end of the day the whole world needs solutions, and technology is becoming a more dynamic and enriching experience.

Reference:

  1. https://landing.google.com/sre/
  2. https://en.wikipedia.org/wiki/Site_Reliability_Engineering
  3. https://hackernoon.com/so-you-want-to-be-an-sre-34e832357a8c
  4. https://blog.shippable.com/how-to-be-a-great-devops-engineer

 

Hope you enjoyed my blog post!

Regards// Ravikiran Talekar

2019 – The year of indoor localization

30 April 2019 by Scionova

There has always been a need for accurate indoor positioning systems, but the lack of affordable, standardized and interoperable solutions has been a major problem holding the market back.

Finally, it looks like this will change. During 2019, not one, but two important new specifications will be released that can totally change this market.

Bluetooth 5.1

On January 21st the Bluetooth SIG released the 5.1 specification. The most important new features are the direction finding methods Angle of Arrival (AoA) and Angle of Departure (AoD). These enable devices to be positioned by calculating the angle to units with known positions. The method relies on either the receiving (AoA) or the sending (AoD) device being equipped with a multi-antenna array; the device switches between the antennas and measures the phase difference of the signal.
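
As a rough sketch of the geometry behind AoA (assuming an idealized two-antenna array with no noise or multipath; the numbers are made up), the measured phase difference maps to an angle like this:

    import math

    def angle_of_arrival(phase_diff_rad, antenna_spacing_m, wavelength_m=0.125):
        """Estimate the arrival angle from the phase difference between two antennas.

        0.125 m is roughly the wavelength of a 2.4 GHz Bluetooth signal; the formula
        assumes the antenna spacing is at most half a wavelength.
        """
        x = phase_diff_rad * wavelength_m / (2 * math.pi * antenna_spacing_m)
        return math.degrees(math.asin(max(-1.0, min(1.0, x))))

    # A 90-degree phase shift measured over a 6.25 cm antenna spacing -> about 30 degrees.
    print(angle_of_arrival(math.pi / 2, 0.0625))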

IEEE 802.15.4z standard for UWB

Like Bluetooth, the latest standard for UWB also includes new features to improve its performance for localization. The UWB solution uses a different approach to positioning: it measures the time it takes for the radio signal to travel from one device to the other. With this time measured, and the speed of light known, the distance between the devices can be calculated.
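
A similarly simplified sketch of the time-of-flight arithmetic (single-sided two-way ranging, ignoring clock drift and antenna delays; the timestamps are made up):

    SPEED_OF_LIGHT = 299_792_458.0  # metres per second

    def two_way_ranging_distance(t_round_s, t_reply_s):
        """Distance from one poll/response exchange.

        t_round_s: time from sending the poll until the response arrives (initiator clock)
        t_reply_s: processing delay at the responder before it answers
        """
        time_of_flight = (t_round_s - t_reply_s) / 2
        return SPEED_OF_LIGHT * time_of_flight

    # A 300.0000 us round trip minus a 299.9334 us reply delay leaves ~33 ns of
    # one-way flight time, i.e. a distance of roughly 10 metres.
    print(two_way_ranging_distance(300.0000e-6, 299.9334e-6))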

So, which one is the most interesting one?

Bluetooth has a long and successful history of providing solutions that gain wide adoption in consumer electronics. They understand that hard work on interoperability is needed to be successful. UWB, on the other hand, has advantages due to its wide spectrum use, which makes it less sensitive to interference.

When it comes to the use case to localize our most common consumer device, the phone, both technologies look very interesting. We see it as very likely that phones and beacons will adopt Bluetooth 5.1 since it solves a problem where Bluetooth has lacked performance.

Looking at the IEEE website, it is also interesting to see that major phone manufacturers are the ones driving the new UWB standard. Apple and Samsung are the two most active companies in the work of getting the standard ready for adoption. This indicates that we will likely see UWB in phones by 2020, and several sites report that this will be the case.

What about accuracy?

Both claim to be able to provide indoor positioning with about 10 cm accuracy, given four fixed devices with known positions within range (around 30 m). Naturally, this will depend on both the implementation and the environment.

Sounds Interesting?

We have long and deep knowledge of Bluetooth, have already started working with UWB for indoor localization, and are ready to take on new challenging projects within this area. We would love to help you be early on the market with these new and interesting technologies.

 

Best Regards

// Peter Fredriksson
Senior consultant

 

Communication Protocols in IoT – Part 3

11 October 2018 by Erik Dahlgren

This is the third part of the series “Communication Protocols in IoT”. The first part can be found here.

Bluetooth is a technology that is probably not unfamiliar to anyone today, since literally every smartphone comes with it as standard. But how is this technology, historically best known for audio streaming and hands-free use cases, relevant for IoT?

Early history
The roots of Bluetooth can be traced back to 1994, when Jaap Haartsen, an engineer working for Ericsson, was tasked with developing a wireless alternative to RS-232 (serial cables). Similar ideas emerged in other companies around the same time, and it quickly became clear that some sort of standardization was required for products from different vendors to be compatible and interoperable. In 1996, five companies (Ericsson, Nokia, Intel, Toshiba and IBM) met in Lund, Sweden, and agreed that a Special Interest Group (SIG) should be formed to drive and standardize the technology. Two years later, in 1998, the Bluetooth Special Interest Group (Bluetooth SIG) was officially established.

Technicalities and Standards
The Bluetooth technology standards and specifications are governed by the Bluetooth SIG. Membership is free for any company (Adopter level), but a higher-level membership (Associate) is available for a yearly fee. Associate membership brings several possibilities for companies that want to actively participate in working groups, get early access to new specifications and contribute to the development of the Bluetooth standard. There is also a higher membership level (Promoter) which cannot be paid for; this small group of companies consists of the most active contributors to the standard, and each of them has one seat on the highest governing body of the SIG, the Board of Directors.

Since the release of Bluetooth 1.0 in 1998, a large amount of work has gone into the specifications and profiles covered by Bluetooth technology, and the most recent core specification (2018) is 5.0.

To sell a product with Bluetooth branding/technology, certification is mandatory. This is mainly enforced to secure interoperability between different devices and brands. Although the certification historically has not guaranteed 100% interoperability between every device on the market supporting the same profile(s), it does provide a quite rigorous suite of tests and requirements that pushes for well-implemented products.

Protocol Basics
When talking about Bluetooth technology, it is nowadays a bit problematic to talk about “one technology”, due to a significant change that was introduced in the core 4.0 standard in 2010. The introduction of Bluetooth Low Energy (BLE) meant that Bluetooth now consists of two different stacks from the physical layer up to the application. Some parts are of course reused, and there is absolutely no problem finding chipsets that support both the traditional (Basic Rate (BR)/Enhanced Data Rate (EDR)) standard and the new (BLE) standard.

BR/EDR
The traditional Bluetooth stack, which is still very relevant today (although maybe not so much in an IoT context), contains profiles for media streaming and control (A2DP/AVRCP), hands-free calls (HFP), serial port emulation (SPP), text message synchronization (MAP) and internet tethering (PAN), to mention some of the most commonly known profiles and use cases. All profiles are described in detail in specifications released by the Bluetooth SIG, and all vendors are required to implement the mandatory parts to enable interoperability. The maximum theoretical (not practical!) transfer speed using EDR is 3 Mbit/s (a “High Speed” (HS) standard also exists, where the actual data transfer is handed over to WiFi, but this is very seldom used).

BLE
Bluetooth Low Energy (BLE), on the other hand, was specifically developed and tailored for IoT use cases. The standard offers very low power consumption together with several powerful features suitable for IoT use cases:

  • Broadcasts:
    Devices can broadcast limited amounts of data even to devices that are not trusted/paired.
  • Dedicated advertising channels:
    Compared to BR/EDR, BLE uses 3 dedicated advertising channels, allowing for much quicker device discovery (scanning 3 physical channels instead of 79, as is the case with BR/EDR). Limiting advertising to only 3 channels also has a positive impact on energy consumption.
  • Flexible protocol definition:
    While the Bluetooth SIG defines some common profiles (Blood Pressure Profile, Heart Rate Profile and Insulin Delivery Profile, to name a few), BLE makes it very easy to define and create your own profile for your specific needs. By using the protocols and capabilities available in the BLE stack you can easily implement your use case(s), as sketched right below.
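
As a hedged illustration of how little client code a standard BLE profile needs (using the third-party Python library bleak; the device address is a placeholder and the parsing only handles the flag bit that selects 8- or 16-bit values), here is a sketch that subscribes to Heart Rate Measurement notifications:

    import asyncio
    from bleak import BleakClient

    # Standard GATT characteristic: Heart Rate Measurement (0x2A37).
    HR_MEASUREMENT = "00002a37-0000-1000-8000-00805f9b34fb"

    def on_heart_rate(_sender, data: bytearray):
        # Bit 0 of the flags byte says whether the heart rate value is 8 or 16 bits.
        bpm = int.from_bytes(data[1:3], "little") if data[0] & 0x01 else data[1]
        print(f"Heart rate: {bpm} bpm")

    async def main(address):
        async with BleakClient(address) as client:
            await client.start_notify(HR_MEASUREMENT, on_heart_rate)
            await asyncio.sleep(30)  # listen for 30 seconds
            await client.stop_notify(HR_MEASUREMENT)

    asyncio.run(main("AA:BB:CC:DD:EE:FF"))  # placeholder device address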

To improve things further, the latest core specification (Bluetooth 5.0) and the additional Mesh specification add a few more powerful features:

  • 2 Mbit/s PHY:
    Compared to Bluetooth 4.0/4.1/4.2, it is now possible to get 2x the maximum throughput if your IoT use case requires heavier data transfer (theoretically up to 2 Mbit/s, compared to 1 Mbit/s for earlier BLE standards; it is, however, important to mention that these numbers are purely theoretical and mainly concern the modulation, so practical data throughput is far lower).
  • 4x Range:
    The Bluetooth 5.0 standard also introduces another new PHY (physical layer) that allows Bluetooth to achieve up to 4x the range compared to Bluetooth 4.0/4.1/4.2. Note that despite some less-than-clear claims during the release of the standard, stating “2x the speed and 4x the range”, you can only get one of these properties at a time; you don't get both, since the PHYs are significantly different.
  • Mesh:
    BLE now supports Mesh technology making it a suitable technology candidate for large-scale deployments and use cases where each edge device doesn’t need to have a direct connection to a more central node.

Use cases:
Bluetooth is a huge technology (the core 5.0 specification alone is over 2800 pages!), so it is safe to claim that the technology can support a vast number of different use cases. This is also obvious when looking at the market: Bluetooth has an extremely impressive market penetration, since it is available in more or less every smartphone. A list of even the common use cases would be extremely long, but you can find Bluetooth in everything from common products such as smart watches, fitness trackers and smart lighting to more exotic implementations in toasters, toilets and toothbrushes.

 

//Erik Dahlgren, Software Developer
