
Partnership & pricing.

Offerings

Scling's primary business is data-value-as-a-service, described below. In addition, we provide strategic consultancy services as well as data engineering courses. Pricing details are found at the bottom of the page. If you are considering partnering with Scling, you should read this page, but also this document, which describes a partnership with Scling in more detail.

Partnership

A partnership with Scling is an agile contract and an iterative journey together. In order to build complex data features, it is essential to start simple, and learn in small steps along the path. Scling therefore offers a business model where customers subscribe to a value stream of development deliverables, which incrementally improve their data flows.
 

The starting point of the journey depends on your current situation and level of data maturity. For example, the journey might start with us jointly forming an inventory of the available data within your company and holding workshops to determine the use cases with the highest business value potential. Before we start ingesting data sources, we jointly decide on the most suitable use cases for analytics or data-driven features. We break the use cases down to form a backlog of development and integration deliverables. Scling edits the backlog and suggests the best path forward, but as a customer, you are in control of the priorities in the backlog.

Deliverables in a customer backlog are defined such that each deliverable is small but provides tangible business value. Examples of different categories of business value:

  • Internal value, e.g. automated forecasting reports or exposing data flows for interactive analysis.

  • External value, e.g. a personalisation feature for your end users.

  • Compliance, e.g. implementing automation for GDPR right to be forgotten or right to data extracts.

  • Risk mitigation, e.g. cloud security hardening or data backup for disaster recovery.

Working incrementally is critical for data success. We therefore work closely together with you on your journey to data maturity, and only build flows that deliver value, rather than propose large projects or comprehensive self-service platforms. Different customers have different priorities and needs, many of which are unknown when the journey starts. Adapting along the way is necessary, and cutting away unnecessary complexity is essential to make the data journey yield return on investment.

The data-value-as-a-service model

Scling builds and operates data platforms for our customers, including the data flows that run on the platform. From a customer perspective, we provide a data refinement process, where raw data material is ingested at one end, and refined valuable data artifacts are emitted at the other end. Ingested data is stored in the platform, and available for long-term use for applications that benefit from larger data volumes.

Scling’s processing engine is built on open source and standard cloud components in order to avoid lock-in, in case a customer should decide to take over operations. While we believe that customers benefit most from letting us take care of operations, you can also view an engagement with us as a quick way to get data flows into production, with the option to eventually take over development and operations in-house.

When engaging in a partnership with Scling, you subscribe to a flow of valuable development deliverables - a value stream. You also subscribe to the service of operating and hosting the technical data pipelines that produce the corresponding refined data artifacts. Each development deliverable is a development step that makes a data flow provide more business value to you as a customer. These are examples of value-adding development on a data pipeline:

  • Ingest new data, and make it visible in an analytics tool or a dashboard.

  • Combine a flow with a newly ingested data source for an analytical purpose.

  • Measure a data quality metric and display it on a dashboard.

  • Add a curation step that improves data quality.

  • Implement automated compliance with GDPR requirements to delete individual users' data.

 

When working towards a larger goal, a deliverable is the smallest incremental improvement that increases business value from a data flow. A deliverable always delivers some concrete business value. Hence, an internal technical change is not a deliverable. Likewise, changing an algorithm parameter is an effort too small to provide value, whereas conducting an experiment with the purpose of determining the appropriate parameter value is large enough.

 

Many deliverables take the form of technical functionality improvements on automated data pipelines, but they could also be other efforts, such as risk reduction, ensuring compliance, raising the availability level, or one-off efforts such as analytical investigation of rare events.

Integration modes


Scling offers three different modes of integration and collaboration, depending on clients’ long-term goals. Clients may mix these modes and use different modes for different use cases. The business model and pricing are the same for all three modes.


Business-oriented integration


For companies that have a primary focus outside IT, but nevertheless have valuable data, Scling offers business-oriented integration, primarily adapted for customer employees outside the IT domain, e.g. sales staff or domain experts. Data typically resides in third-party systems, such as customer relationship management (CRM) systems, product lifecycle management (PLM) systems, sales support systems, document stores, email servers, etc. Scling obtains client data by integrating with these systems, or by creating data ingestion mechanisms integrated in business processes, e.g. spreadsheets or email. We can also collect data from externally facing web services.

Refined data results are exposed to the client through business-oriented interfaces for consumption, requiring no technical effort on the client side, e.g. web dashboards, forecasting in spreadsheets, daily sales reports in a document store, or emailed suggestions of upselling opportunities before customer meetings.


Technical integration

In technical integration mode, the client company internally operates IT services that store valuable data, whereas Scling builds and operates data refinement pipelines. We collaborate to decide the most suitable technical integration for each data source, e.g. file uploads, database dumps, web sockets. Refined data products, such as search or recommendation indexes, are exposed to client services via technical interfaces, e.g. file uploads or internal microservices.


Hybrid teams

Scling also offers a collaboration model where Scling staff and client staff form joint teams, potentially colocated, that collaborate on a daily basis. Data delivery works as in the technical integration case above, but client staff uses Scling’s internal “Orion” platform and writes data processing code together with Scling. There is mutual learning on both sides - client staff learns data engineering and DataOps, while Scling staff learns client domain concepts.

Hybrid team mode is appropriate for clients that seek to eventually graduate and operate their data platform without Scling. It is also appropriate for domains that are complex enough to require tight cooperation between domain experts and data engineers or data scientists, e.g. industrial manufacturing processes.
 

Pricing

Scling customers subscribe to a value stream of deliverables per month, at a minimum of two per month. We aim to keep the value stream flowing at a steady rate, so that any fluctuations even out over time. The price for a deliverable value stream is 20K EUR for two deliverables per month, 40K EUR for four per month, and so forth.

 

In addition to the value stream, customers also subscribe to the service of operating and hosting the technical data pipelines that produce the corresponding refined data artifacts. The operations price is proportional to the number of functionality-adding deliverables that have been implemented. These are examples of functionality-adding deliverables:

  • Ingest a new data source and use in a pipeline.

  • Increase service availability level.

  • Make data available on a REST API.

  • Arrange data backups for disaster recovery.

Deliverables that merely change functionality, e.g. modifications to an algorithm, do not increase complexity and therefore do not increase operational cost. These are examples of deliverables that do not add functionality, or whose additions are small enough not to increase complexity:

  • Run experiments to determine the desired value of an algorithm parameter.

  • Change the algorithm in a pipeline or add a new field, without adding new sources.

  • Perform a disaster recovery exercise.

The price for operations is 1K EUR per month for each functionality-adding deliverable. The operational fee includes data storage of ingested data for 10 years. 

Needs and data pipelines change, and whenever you no longer need a pipeline, or want a new version, the operational subscription cost for the decommissioned pipeline is removed.

Your data will be processed by technology that scales naturally. Hence, we can handle large data volumes, but size adds cost in complexity and operations. Ingress data up to 1GB (uncompressed) daily, or processing steps handling up to 100GB datasets at a time, is included in the base price. Scaling above that induces increased operational cost - another 1K EUR / month for each 2x increase in data size for a source, up to 100GB / day or 10 TB processing datasets. We can handle larger volumes, but the compute resource costs increase linearly thereafter.
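The doubling rule above amounts to a logarithmic surcharge. A minimal sketch of one way to read it, assuming partial doublings round up and the rule applies per data source (both are our interpretation, not stated explicitly):

```python
import math

def volume_surcharge_keur(daily_gb: float, included_gb: float = 1.0) -> float:
    """Monthly operational surcharge in KEUR for one data source:
    1 KEUR for each 2x increase above the included volume."""
    if daily_gb <= included_gb:
        return 0.0  # within the base price
    doublings = math.ceil(math.log2(daily_gb / included_gb))
    return float(doublings)  # 1 KEUR per doubling

# 4 GB/day is two doublings above the included 1 GB/day
```

By this reading, a source ingesting 4 GB/day would add 2 KEUR / month, and the 100 GB/day ceiling corresponds to seven doublings.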

The pricing above assumes that your use cases can be solved with batch processing and traditional business logic. Batch processing systems induce a delay of at least 5 minutes from data ingestion time to the time that refined data can be consumed. This is adequate for at least 95% of use cases in most domains. For use cases that require shorter delay, stream processing is required. Each deliverable requiring stream processing is counted as four in the value stream, and the operational pricing for such deliverables is also four times a standard deliverable.

 

Use of machine learning technology also increases complexity. The same factors of 4x for development and 4x for operations apply to machine learning deliverables as for stream processing deliverables.
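Taken together, the rules above imply a simple monthly formula. A sketch under our assumptions: the value stream costs 10 KEUR per standard deliverable (20 KEUR for two), stream-processing and machine-learning deliverables each count as four, and operated functionality-adding deliverables (here already expressed in standard-deliverable units) cost 1 KEUR each:

```python
def monthly_cost_keur(batch_deliverables: int,
                      advanced_deliverables: int,
                      operated_units: int) -> float:
    """Estimated monthly price in KEUR.

    batch_deliverables    -- standard (batch) deliverables this month
    advanced_deliverables -- stream-processing or ML deliverables (count as 4 each)
    operated_units        -- functionality-adding deliverables in operation,
                             in standard-deliverable units
    """
    DEV_RATE = 10.0  # KEUR per standard deliverable (20 KEUR for two per month)
    OPS_RATE = 1.0   # KEUR per operated functionality-adding unit
    WEIGHT = 4       # stream processing / machine learning multiplier

    dev_units = batch_deliverables + WEIGHT * advanced_deliverables
    dev_units = max(dev_units, 2)  # value stream minimum: two per month
    return dev_units * DEV_RATE + operated_units * OPS_RATE

# Two batch deliverables plus five operated units: 2 * 10 + 5 * 1 = 25 KEUR
```

By this reading, a single machine-learning deliverable in a month fills four value-stream slots (40 KEUR), consistent with the 4x factor stated above.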

Pricing, summary

 

All prices exclude VAT.

Data-value-as-a-service

Development

  • 20 KEUR / month for a value stream of two deliverables per month

  • Machine learning: Each deliverable counts as 4 standard deliverables.

  • Stream processing: Each deliverable counts as 4 standard deliverables.

Operations

  • 1 KEUR / month for each functionality-adding deliverable, see above. Includes single copy storage for 10 years.

  • Machine learning: Each deliverable counts as 4 standard deliverables.

  • Stream processing: Each deliverable counts as 4 standard deliverables.

  • Ingestion of a data source with a rate above 1GB / day (uncompressed):
    1 KEUR / month for each 2x increase in data size.

  • Data pipeline requiring more than 100 GB to be processed at once:
    1 KEUR / month for each 2x increase in data size.
     

Data & AI strategic consulting

Advisory and strategic consulting: 250 EUR / 2700 SEK per hour.

Prepared presentations: 4 x 250 EUR / 4 x 2700 SEK per hour. Minimum time 30 minutes.

Data engineering education

Three-day onsite data engineering course, hosted by the customer, max 16 participants: 17 KEUR / 180 KSEK.

Three-day data engineering course, public offering in Stockholm: 22 KSEK / person.