Bluetooth Low Energy Audio

30 April 2020 by Scionova

In this blog post I will give a brief overview of the new Bluetooth Low Energy (BLE) Audio features enabled by the recently released Bluetooth 5.2 Core Specification.

Previous BLE blog posts

We have previously written two blog posts about BLE and the use cases it enables. Read them for the background of BLE and its state up until Bluetooth 5.1.

  • Bluetooth LE in IoT by Erik Dahlgren: Communication Protocols in IoT – Part 3
  • Bluetooth LE in indoor localization by Peter Fredriksson: 2019 – The year of indoor localization

Adding audio to Bluetooth LE

By adding audio streaming over the Bluetooth Low Energy radio, the Bluetooth SIG enables audio for low-power devices, with hearing aids as the main focus. Some hearing aids have used Bluetooth before, but often through external hardware or proprietary protocols and profiles. With LE Audio, interoperability and ease of implementation are greatly improved.

Beyond improving Bluetooth audio in hearing aids, there are multiple other use cases for the new LE Audio:

  • Better support for “True Wireless” earbuds, allowing separate streams to each device (e.g. the right and left ear). Previously this required proprietary solutions, or having only one of the earbuds connected to the audio source (the phone), which would then relay the audio packets to the other earbud.
  • Allowing more Bluetooth sinks (headphones) to be connected to one source at once, e.g. letting you share the music you listen to with a friend.
  • Expanding the existing broadcast functionality (see Erik’s post about BLE) to also broadcast audio streams. This could be used for traffic announcements at a train station, guided tours at museums and many other use cases!

LE Isochronous Channels

The new features described above are largely enabled by a new addition to the Core Specification called Isochronous Channels. The way these channels are specified allows for multiple time-synchronized Isochronous (audio) Streams within what is called an Isochronous Group. This ensures that audio packets are rendered at the correct times, and that packets which are no longer valid are discarded.

Connected Isochronous Streams

Variants for both Connected and Broadcast Isochronous Streams/Groups have been added to the specification to enable all the new kinds of LE Audio functionality. In the image above, the arrows visualizing data directions are bi-directional between the master and the slaves, which is true only for Connected Isochronous Streams. Compare with the uni-directional arrows in the image for Broadcast Isochronous Groups/Streams below:

Broadcast Isochronous Streams

For implementation details of the Isochronous Streams and Groups, refer to the Core Specification linked to at the bottom of this post.

New audio codec: LC3

Bluetooth LE Audio will use a new codec called LC3 (Low Complexity Communication Codec). According to tests performed by the Bluetooth SIG, the new codec delivers better audio quality at lower data rates than SBC, the current standard audio codec for Bluetooth BR/EDR. The promising test results suggest that a lower bitrate can be used while keeping the audio quality. A lower bitrate means less data sent and received over the radio, enabling lower power consumption.

See the graph below for a comparison of audio quality between LC3 and SBC from the study done by the Bluetooth SIG. The rating is based on ITU-R BS.1116-3. The vertical scale indicates the perceived difference in audio quality compared to the original source:

  • 5: “Imperceptible”
  • 4: “Perceptible, but not annoying”
  • 3: “Slightly annoying”
  • 2: “Annoying”
  • 1: “Very annoying”

Bluetooth Codec Comparison

See Chapter 4: Test Method in the test methods document ITU-R BS.1116-3 for more details on how the tests were performed.

Diving deeper

If you are interested in learning more about the new features in Bluetooth 5.2 (including Isochronous Channels), see the Bluetooth 5.2 Feature Overview. To get even further into the details, the complete Core Specification is there for you.

 

// Johannes Jansson, Software Developer

An Introduction to AWS Cloud Development

22 October 2019 by Scionova

My experience with Cloud Development

Roughly one year ago I began developing in AWS (Amazon Web Services), and I immediately saw what all the buzz was about. AWS, like other cloud platform providers (Azure, GCP etc.), supplies an immense number of services, both managed and unmanaged, ranging from completely serverless to a dedicated server over which the customer has (almost) complete control. These services provide the tools to develop a scalable, secure, agile and cost-efficient application, where hardware, servers and OS management can be abstracted away to allow more focus on the actual code of your application.

In the past year I have been involved in the development of a microservice in AWS, and the past month has been spent preparing for, and later passing, the AWS Certified Solutions Architect – Associate exam.

 

What problems could be solved by using a Cloud Platform?

Imagine that you are part of a start-up aiming to develop a platform that provides a marketplace for autonomous car owners to make their vehicle available for hire when they are not using it themselves. Traditionally, starting the development of the application requires servers to run the application and databases, as well as other hardware, e.g. network equipment to enable connection to the internet.

As development continues, changes in requirements might require additional hardware to be purchased or existing hardware to be replaced. Build tools and other CI-related tooling are required to ensure the quality of your platform, all of which need their own servers. Moving the solution to production might require yet another set of equipment with capacity to handle spikes in the load of a production environment.

To minimize downtime of the platform, back-up servers will have to be purchased. All this equipment is associated with a capital investment which might not be available for your start-up without the assistance of an external investor. In the cloud, services are paid for in a pay-as-you-go model. No up-front investment is required, which better fits the financial situation of your start-up.

 

Serverless Architecture

For the last couple of years, ‘Serverless’ has had a huge impact on the industry. But what does it really mean to have a Serverless Architecture? Traditionally, applications have been installed on a specific physical server, where everything from hardware to application requires maintenance. With a serverless architecture, the only concern for the developer is the code of the application. The rest is left to the cloud provider to maintain. Of course, this does not mean that code is not running on a server, only that it is not important which server the code runs on.

Other than reduced maintenance, a serverless architecture brings automated scaling of your environment, a cost model that depends on the actual utilization of the services and a more microservice-friendly infrastructure.

 

AWS Services

These are some AWS services which could provide an entry point for someone who wants to get acquainted with the AWS ecosystem:

 

  • AWS CloudFormation (https://aws.amazon.com/cloudformation) – An effective tool for describing your AWS infrastructure resources as code (YAML, JSON) in CloudFormation templates. This enables you to store the specification of your entire infrastructure in version control. When changes are made to the template, CloudFormation calculates the delta against the deployed set of resources into a change set and thereafter executes the change set. Having your infrastructure as code is also crucial when managing multiple staging environments (Dev, QA, Prod etc.), where it is important that the environments are identical to facilitate code and infrastructure quality assurance (a minimal deployment sketch follows after this list). An alternative to CloudFormation is Terraform, an open-source, platform-independent tool for describing infrastructure (https://www.terraform.io).

 

  • AWS Lambda (https://aws.amazon.com/lambda) – This is the core of any serverless application developed in AWS. A Lambda basically consists of the code that you want to execute and something that triggers it, e.g. an API being called or a database table being updated. When the Lambda is triggered, the supplied code is deployed and executed, and shortly after the execution finishes the deployment is removed. Parallel triggerings of a Lambda result in multiple deployments of the code, and this scales automatically. The cost of a Lambda is based on the number of executions together with the execution time and the amount of memory allocated for the Lambda code (see the handler sketch after this list).

 

  • Amazon EC2 (https://aws.amazon.com/ec2) – Elastic Compute Cloud – EC2 is a service that provides a virtual machine, or “instance”, on a server in AWS. When deploying an instance, it is possible to choose from an abundance of instance types and pre-configured operating systems with different application setups. An instance type can be anything from a cheaper general-purpose instance to a more expensive one, e.g. an instance equipped with more graphics resources for graphics-intensive or machine learning workloads. Each instance type also comes in several sizes to support workloads of varying load.

 

  • Amazon VPC (https://aws.amazon.com/vpc) – Virtual Private Cloud – VPC is used to set up a network in AWS. The network can then be equipped with subnets with different CIDRs, NAT gateways, Internet gateways, load balancers, services to connect the network to an on-premise network etc., all without setting up any hardware yourself. EC2 instances can be deployed in public and private subnets to provide a tiered application setup where database instances and back-end instances (in the private subnet) are only accessible through the front-end instances (in the public subnet).

 

  • Amazon ECS (https://aws.amazon.com/ecs) – Elastic Container Service – A container orchestration service provided by AWS that supports Docker containers. The service is available in two modes, EC2 and Fargate. The EC2 mode requires the developer to manage the EC2 instances that the containers run on, as well as the scaling of the number of instances, while the Fargate mode is fully managed in this regard. An open-source alternative to Amazon ECS is Kubernetes (https://kubernetes.io).

 

  • Amazon DynamoDB (https://aws.amazon.com/dynamodb) – DynamoDB is a fully managed NoSQL database that provides great performance and is highly scalable. DynamoDB together with AWS Lambda and Amazon API Gateway (https://aws.amazon.com/api-gateway) provides all the tools required to build a small, simple and completely serverless microservice, as sketched after this list. A NoSQL database is an excellent fit for applications with well-defined database access patterns, i.e. where the queries that will be executed are known in the design phase and the NoSQL table(s) can be designed accordingly. However, for applications with more ad hoc database access patterns, a SQL database would be a better choice. This session from AWS re:Invent 2018 (https://www.youtube.com/watch?v=HaEPXoXVf2k) takes a deep dive into DynamoDB and explains when a NoSQL database should be utilized.

 

  • Amazon S3 (https://aws.amazon.com/s3) – Simple Storage Service – A managed object storage service which provides a whopping 99.999999999% durability for any stored object, achieved by storing the objects in multiple data centers across a Region (a Region comprises a set of Availability Zones, which in turn comprise a set of data centers). S3 is used for storing different types of files, e.g. videos, images and documents. Some features of S3 include versioning of objects, replication of objects to another AWS Region, archiving objects to cheaper storage options when they are no longer frequently accessed (e.g. Amazon S3 Glacier, https://aws.amazon.com/glacier), hosting static web content etc.
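To make the CloudFormation item above concrete, here is a minimal sketch that deploys a template from Python using boto3; the template, stack name and bucket resource are invented for illustration:

```python
import boto3

# A tiny CloudFormation template, kept as code (YAML): it declares a
# single S3 bucket. Resource and stack names here are hypothetical.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AssetBucket:
    Type: AWS::S3::Bucket
"""

cloudformation = boto3.client("cloudformation")

# create_stack provisions everything the template declares; subsequent
# template edits can be rolled out through change sets.
cloudformation.create_stack(
    StackName="demo-asset-stack",
    TemplateBody=TEMPLATE,
)
```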

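And here is a minimal sketch of the serverless trio mentioned under AWS Lambda and Amazon DynamoDB: a Python Lambda handler, triggered via API Gateway, that writes the request body to a DynamoDB table. The table name, its "id" key and the event shape are assumptions for the example:

```python
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Vehicles")  # hypothetical table with an "id" partition key

def lambda_handler(event, context):
    """Entry point invoked by AWS Lambda for each API Gateway request."""
    item = json.loads(event["body"])  # proxy events carry the body as a JSON string
    table.put_item(Item=item)         # DynamoDB handles the scaling
    return {
        "statusCode": 200,
        "body": json.dumps({"stored": item["id"]}),
    }
```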
 

Some things to be aware of

As mentioned, cloud development offers lots of opportunities moving forward. However, there are some things to take into consideration when deciding whether to move your solution to the cloud. Platform lock-in is one. Choosing a cloud provider will most likely make you dependent on that company’s offering, which makes you vulnerable to changes in price or functionality of the services you use.

Another consideration is the difference in price model compared to traditional development. Services in the cloud, especially managed services, are often priced per invocation and/or per unit of data transferred/processed/stored. This model might not suit solutions with an even, non-fluctuating load over time; in that case, a fleet of EC2 instances could be a better fit. Neglecting this detail when designing a solution could result in unnecessarily high costs.

Being well-prepared when designing a solution is key to avoid these pitfalls. The AWS Well-Architected Framework (https://aws.amazon.com/architecture/well-architected) provides some guidelines on how to architect a solution with performance, security, cost etc. in mind.

 

If you are interested in starting your journey in the cloud in general and AWS in particular and are eager to learn more about the possibilities and risks of cloud development, do not hesitate to contact me at Daniel.Andersson@scionova.com.

 

Debug your way to understanding

24 September 2019 by Scionova

This blog post will give you some practical tips on how to make the debugger your best friend, if it is not already. If you are new on the job and thrown into a big legacy system, it can sometimes be really difficult to understand the flow of the code. Even when you are familiar with the codebase, a particularly complicated piece of code may leave you stumped. A debugger can be a helpful tool for understanding what is happening and where information stems from. Going back to the basics and really getting to know how the debugger(s) in your IDE(s) work can be a real boost.

 

This blog post is not supposed to be a tutorial for any one IDE. Instead, I will go through debugging concepts that are present in most IDEs, as well as some nifty tricks found in specific debugging tools. I am most familiar with the Eclipse debugging tool, so most of my references will point there. However, the concepts that I cover are present in most debugging tools.

 

Get to know your breakpoints

Debugging is stepping through your code, line by line, while monitoring how the variables in your code change. Most of the time you are not interested in each and every line of code; you are interested in a specific part of the code or a specific variable. As such, you want to be able to tell your debugger when it should pause execution for closer inspection. You do this with breakpoints.

 

The most basic breakpoint will simply stop the execution at a given line; however, you can do much more with breakpoints! One of my most used breakpoints is the conditional breakpoint. As the name suggests, it halts execution only when a given condition is met or when a value changes. This allows you to focus on what is really important.
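The concept is not tied to any one IDE. As a minimal illustration, Python’s built-in command-line debugger pdb accepts a condition after the breakpoint location (the file name, line number and variable below are hypothetical):

```
$ python -m pdb orders.py
(Pdb) break orders.py:42, total < 0
(Pdb) continue
```

Execution then pauses at line 42 only when total < 0 holds; every other pass over that line runs on without stopping.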

 

A really helpful breakpoint is the exception breakpoint. If you have no idea why an exception is thrown and want to find the cause, an exception breakpoint will halt execution right when the exception is triggered. It is then easy to see what triggers the exception, and what the cause of that trigger is.

 

If you are working with a multi-threaded application and want to follow the execution of a specific thread, you may want to filter a breakpoint on that thread. You can then follow its execution without being confused by other threads.

 

You can also do some additional things with breakpoints, such as suspending execution when the breakpoint has been hit a certain number of times, or suspending all breakpoints until a certain breakpoint has been reached.

 

Get to know the controls 

When you have reached the code of interest for debugging, you may want to follow the execution more closely. To do this you have some progression controls at your disposal. 

 

The basic progression controls for debugging are fairly self-explanatory. The step into option will go into the statement you are currently at. The step over option, in contrast, will jump over the statement and show you the result afterwards. This means that if you choose to step over a big method call, you do not have to follow the execution through the entire method; you will only see the result.

 

The final basic progression tool is the step return, which takes you out to the caller of the current statement. If you are in a method, this means you return to the place where the method was called.
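For comparison, the same three controls exist in command-line debuggers; in Python’s pdb they are called step, next and return (a sketch, with annotations added for illustration):

```
(Pdb) step      <- step into the call on the current line
(Pdb) next      <- step over it; stop at the next line in the current frame
(Pdb) return    <- run until the current function is about to return
```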

 

A lesser known but very useful progression option is drop to frame. Using it takes you back to the beginning of the frame you are currently in. In the case of a method, this means you are taken back to the very top of the method. This can be very useful when finding out where an issue originates. You might step over a piece of code with the step over function and find that a method call is the root of your error. Unfortunately, stepping back is generally not an option while debugging. Drop to frame, however, takes you back to the beginning of the code you are currently in, and you can then choose to step into the statement that caused the error.

 

The drop to frame option is also very useful if your IDE and debugger allow hot changes, i.e. changing the code while debugging without having to recompile and rerun your application. If you are working on a big project that takes a long time to compile or run, this can be a great time-saver. In Eclipse, while working with Java, you can make code changes that do not affect method headers or similar larger structures. After making a change and saving, you are automatically brought to the top of the frame, allowing very quick feedback on your change.

 

If you are working with applications that are triggered by another application, you may think that you cannot debug your application properly. These situations can be really difficult, as it is not the application you want to debug that drives the execution. However, if you are developing in Visual Studio you are in luck! Visual Studio allows you to attach the debugger to a running process, so that execution triggered in the application you are debugging hits your breakpoints. To do this, go to Debug > Attach to process and select the process you want the debugger to attach to.

 

Make friends 

While debugging may seem basic, and the prospect of debugging your own or others’ code may be daunting, I have found that it really helps me understand the applications I am developing. While I was still a student, I rarely saw the use of the debugger; it was mostly just a bothersome tool that never wanted to work for me.

 

When I started working on a bigger project with more complicated data structures, I realized how useful it can be. Getting familiar with the tools and options available showed me how essential good debugging skills are to my work as a developer. It takes some getting used to, but as the saying goes, practice makes perfect, and these days the debugger is my most beloved development tool. I hope this blog post has given you some food for thought and introduced some concepts that you were not aware of before.

 

Best regards,

Lisa

 

IoT… as a Business Approach.

12 August 2019 by Scionova

We are living in a turbulent world where competition is fiercer than ever. New and existing companies must take seriously the job of continually initiating and adjusting to the new Industry 4.0. Internet of Things (IoT) technology is causing immense disruption across many industries, with the pace of change increasing every day. However, not everything is about high-tech connected solutions or state-of-the-art technology development; IoT business is more than that.

 

Cutting edge technology…just one more player.

We all take for granted that our TV is connected to the internet, that our smartphone communicates with our watch, that the smart indoor heating system always delivers the perfect temperature (especially in the freezing Swedish winter) and so on. Yes, the Internet has given most of the world’s population unlimited access to data and technology. But technology is not the only main player when developing an IoT business and monetizing its benefits.

 

The innovation should not be in technology only; it should also consider the development of a new business model and a delivery method for IoT services to other organizations and end-customers. Technology gives us many possibilities for creating innovative solutions, but if we cannot materialize them into a business, a great idea might remain just a great design. IoT business encompasses additional critical players that together create the perfect match for embarking on the “IoT journey”.

 

The focus on technology in IoT services means that the business aspects are often overlooked. Successful IoT services are built on the premise of a clearly defined service offering complemented with operational and business models. There is a tendency to treat each of these views in isolation, but effective IoT services onboard these models in parallel.

 

Cultural (tech) fact: 

Did you know that the concept of a smart IoT device was introduced back in 1982? A modified Coke machine at Carnegie Mellon University (Pittsburgh, USA) became the first Internet-connected appliance. This Coke machine could report its inventory and whether newly loaded drinks were cold.


The Business of IoT.

I remember a conference where the speaker said: “If you haven’t started in the IoT, you are already late”. That is not completely accurate. IoT will be “alive” for a long time, and we need to take advantage of this with new IoT services, ideas and business models. You will never be too late with innovative ideas, and IoT offers a vast range of possibilities.

 

The basic components of a business are a good product, a reliable business model and customers. Of the latter, we have plenty (at least in terms of connected devices). Intel, for instance, projects device penetration to grow from 2 billion in 2006 to 200 billion by 2020, which means nearly 26 smart devices for each human on Earth. Others, such as Gartner, which takes smartphones, tablets and computers out of the equation, estimate 20.8 billion connected devices by 2020.

 

Hence we need to understand the business aspects of the disruption caused by IoT and how to take advantage of the coming opportunities.

 

Technology is only one of many tools for developing a successful, profitable and sustainable IoT business. There is literature explaining the different aspects to consider when developing IoT services, but I would like to mention the two that I consider most important:

 

  • Ecosystems:

    In simple words, for IoT to reach its full potential, it will require several ecosystems and currently “non-cooperating” industries to work together to maximize business.

 

  • IoT as a Service (AaS):

    A “pay-as-you-grow” model in which the customer pays in proportion to the usage of the service. This enables low initial investment, scalability and cost control.

 

IoT business is not solely about technology; it involves multiple aspects that must be attended to in parallel. Many IoT projects and businesses are condemned to fail as profitable businesses if people within the organizations do not treat business-relevant aspects as being as important as technology development throughout the entire product lifecycle.

 

Best regards,

 

Why not not Modern CMake

28 June 2019 by Scionova

Lately I have worked a lot with the build framework/system in the project of my current assignment in automotive. Doing so, I have noticed the benefits of writing CMake that conforms with what is called ‘Modern CMake’, or rather the drawbacks of not doing so. Therefore, I’d like to take the opportunity to share my experience.

 

This will not be a complete description of what Modern CMake is; there are loads of articles about that, and here is a good entry point to Modern CMake. However, I’ll give you a few Do’s and Don’ts along with the issues I have had when these were not followed. My main advice is to treat CMake like any other production code and demand quality, readability and maintainability.

  • Do not use global functions such as include_directories or link_libraries. These often obscure what targets actually use and need. Use functions such as target_include_directories and target_link_libraries to modify each target explicitly instead.
  • Do not modify CMAKE_CXX_FLAGS in subprojects. The project might change to a compiler that does not support all the flags the old one did. This kind of variable should be modified in the top-level CMakeLists.txt or, preferably, in a toolchain file.
  • Use ALIAS targets so that add_subdirectory and find_package export the same target names (see the sketch after this list). The issue I saw in my current project arose when we started to build an external library instead of using prebuilt binaries (or vice versa), and the target_link_libraries calls of all the dependent components needed to be updated.
  • Do not use target_include_directories with paths reaching outside the directory of the component. The project might change its file structure, and all these paths would need to be updated. Instead, export the needed header files from the other component, either with target_include_directories with the PUBLIC property or simply by exporting an INTERFACE library.
  • Provide well-defined and documented functions for adding tests on a project level. The main benefit is that it is easier to change the behaviour of the tests and how they are used in CI gates if naming conventions and label use are centrally enforced and implemented once.
  • Use cmake_parse_arguments when implementing custom functions. Implementing the same functionality yourself might introduce unnecessary complexity or obscurity.
  • Do not overdo the use of variables. When debugging CMake and/or the binaries, it might prove challenging to expand all the variables in your head.
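To make the target-centric style and the ALIAS advice concrete, here is a minimal sketch; the project, target and directory names are made up for illustration:

```cmake
# mylib/CMakeLists.txt
add_library(mylib STATIC src/mylib.cpp)
add_library(myproject::mylib ALIAS mylib)           # consumers use the same name whether the
                                                    # library is built in-tree or found prebuilt
target_include_directories(mylib PUBLIC include)    # usage requirement travels with the target

# app/CMakeLists.txt
add_executable(app main.cpp)
target_link_libraries(app PRIVATE myproject::mylib) # pulls in mylib's PUBLIC include path;
                                                    # no global include_directories needed
```

With this setup, switching mylib between an in-tree build and a find_package import does not require touching the dependent targets’ target_link_libraries calls.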

Further Reading 

As I mentioned before, there are a multitude of articles about Modern CMake and how to follow it. Here are some of them:

  • Effective Modern CMake 
  • An Introduction to Modern CMake 
  • It’s Time to Do CMake Right 
  • More Modern CMake 

 

Hope you enjoyed my blog post!

 

Best Regards,

Patrik Ingmarsson

DevOps and SRE: Where do you draw the line?

20 June 2019 by Scionova

Customer expectations for application performance are high, and in most cases the development team spends most of its time and effort building valuable new systems. At the same time, the delivering teams are under pressure to maintain the stability and reliability of the business systems. This scenario has driven the adoption of DevOps and SRE as formal practices in the development and quality assurance community. Let’s discuss what DevOps and SRE actually mean and how they can partner together in an organization.

 

DevOps Engineer:

DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support. It suggests that the best approach is to hire engineers who can be excellent coders as well as handle all the Ops functions.

Skills required for a DevOps Engineer:

  • Excellent at scripting languages
  • Knowledge and proficiency with Ops and Automation tools
  • Good understanding of the Ops challenges, and the ability to support/address them during design and development
  • Efficient with frequent testing and incremental releases
  • Soft skills for better collaboration across the teams

More about how to become a great DevOps Engineer:

https://blog.shippable.com/how-to-be-a-great-devops-engineer

 

 

Site Reliability Engineer (SRE):

As Ben Treynor (Founding father of SRE) puts it, “SRE, fundamentally, it’s what happens when you ask a software engineer to design an operations function”. The main goals are to create ultra-scalable and highly reliable software systems.

SRE is a job function that focuses on the reliability and maintainability of systems. It is also a mindset and a set of engineering practices to run better production services. An SRE has to be able to engineer creative solutions to problems, strike the right balance between reliability and feature velocity and target appropriate levels of service quality.

Skills required for an SRE:

  • Ability to run post-mortems on unexpected incidents to prevent future hazards
  • Skilled in evaluating new possibilities and in capacity planning
  • Comfortable handling operations, monitoring and alerting
  • Knowledge and experience in building processes and automation to support other teams
  • Ability to persuade organizations to do what needs to be done

More about how to become an SRE:

https://hackernoon.com/so-you-want-to-be-an-sre-34e832357a8c

What differentiates SRE (Site Reliability Engineering) from DevOps? Aren’t they the same?

  1. SRE is to DevOps what Scrum is to Agile. DevOps is not a role; it is a cultural aspect that cannot be assigned to an individual and should be practiced as a team. SRE, on the other hand, is the practice of creating and maintaining a highly available infrastructure/service, and it is a role assigned to a software professional.
  2. SREs at times practice DevOps. Whereas DevOps, as practiced in organizations, focuses more on automation, SREs focus more on aspects like system availability, observability, instrumentation and scale.
  3. Both SRE and DevOps are used for the management of the production environment. Primarily, SREs find and solve some of the production problems themselves, whereas the purpose of DevOps is to find problems and then dispatch them to the dev team for solutions.
  4. Although the concept of DevOps is about handling and coping with issues before they fail, failure is something that we, unfortunately, cannot avoid. DevOps embraces this by accepting failure as something that is bound to happen and that can help the team learn and grow. In the world of SREs, this objective is delivered by having a formula for balancing accidents and failures against new releases. In other words, SREs want to make sure that there are not too many errors or failures, even if they are something we can learn from. This formula is measured with two key identifiers: Service Level Indicators (SLIs) and Service Level Objectives (SLOs); see the sketch after this list.
  5. SRE works closely with monitoring of applications or services in production, where automation is crucial to improving the system’s health and availability, while DevOps primarily focuses on empowering developers to build and manage services by giving them measurable metrics to prioritize tasks. There seem to be very few people who can handle a senior SRE or DevOps role, as it requires the combination of a software engineer, an architect, a system engineer and an experienced operations master.
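As a toy illustration of the error-budget idea in point 4, here is a small Python sketch; the SLO value and downtime figures are invented for the example:

```python
# Error budget for a 99.9% availability SLO over a 30-day window.
SLO = 0.999
window_minutes = 30 * 24 * 60                # 43 200 minutes in the window

error_budget = (1 - SLO) * window_minutes    # about 43.2 minutes of tolerated downtime
observed_downtime = 12.5                     # hypothetical SLI measurement, in minutes
remaining = error_budget - observed_downtime

print(f"budget: {error_budget:.1f} min, remaining: {remaining:.1f} min")
# When `remaining` approaches zero, an SRE team would typically slow down
# new releases until reliability is back within the SLO.
```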

Even though SRE has only recently come into the spotlight, the key is to partner DevOps and SRE to get systems into production effectively and efficiently, by thinking about how a system will run in production before it is built. One way to achieve this is to integrate the quality team in the end-to-end DevOps workflow and agree on metrics like code coverage and downtime thresholds.

The SRE and DevOps concepts can still cause confusion at some level, but it all depends on the company and its interpretation of the job profiles. The roles and names might vary, but at the end of the day the whole world needs solutions, and technology keeps becoming a more dynamic and enriching experience.

References:

  1. https://landing.google.com/sre/
  2. https://en.wikipedia.org/wiki/Site_Reliability_Engineering
  3. https://hackernoon.com/so-you-want-to-be-an-sre-34e832357a8c
  4. https://blog.shippable.com/how-to-be-a-great-devops-engineer

 

Hope you enjoyed my blog post!

Regards// Ravikiran Talekar

