Thrive Community: Making Caregiving Less Stressful

Client at a glance

10,000+ Active members globally
50+ Countries in the community
Over 100 Projects & initiatives

Transforming Caregiving Through Technology

By uniting seniors, caregivers, and healthcare professionals on a single platform, Thrive Community’s app fosters a supportive ecosystem for caregiving. Through user-centric design and innovative technology, Wiser helped Thrive Community reduce stress, improve communication, and deliver a seamless experience that supports caregivers and seniors worldwide.

Challenge

Thrive Community envisioned a platform to alleviate the stress of caregiving by centralizing communication, coordination, and updates for families, caregivers, and healthcare professionals. However, the client lacked the resources and technical expertise to transform their idea into a fully realized product. They needed an intuitive solution designed for seniors and caregivers, simplifying caregiving tasks while fostering engagement and connection.

Our Approach

We partnered with Thrive Community to design and deliver an app tailored to their vision of simplifying caregiving while ensuring ease of use for seniors.

Key actions included:

  1. User-Centric Design: Developed an intuitive app for seniors and caregivers with simple account setup and a “mood check-in” feature for easy emotional communication.
  2. Enhanced Coordination: Enabled users to add up to seven circle members, including family and healthcare professionals, to streamline care planning and communication.
  3. Engagement Features:
    • Integrated a photo-sharing news feed and a customizable news module to keep seniors informed and connected.
    • Added Amazon integration, allowing seniors to purchase suitable products directly from the app.
  4. HIPAA-Compliant Communication:
    • Integrated video calls and private chat features via VoIP, enabling virtual check-ups and secure communication.
    • Simplified calling with one-tap access to caregivers or family members.
  5. Cross-Platform Scalability: Developed Android and iOS applications using Flutter, reducing future maintenance costs while enhancing the onboarding journey.

Impact Delivered

  • Simplified Coordination: Centralized caregiving tasks, reducing the need for multiple apps and improving care plan efficiency.
  • Enhanced User Experience: Delivered intuitive applications for both Android and iOS, ensuring accessibility and engagement for seniors and caregivers.
  • Improved Call Stability: Transitioned to a more reliable VoIP provider, enabling seamless communication, with push notifications delivered even when the app is offline.
  • Cost-Efficient Development: Leveraged Flutter to reduce maintenance costs while ensuring feature parity across platforms.

Expertise and Scope

  • Deliverables: iOS and Android applications, including news feed, mood check-ins, and video call functionalities.
  • Technology Stack: Swift, Dart, Kotlin, Flutter, Reactive programming, VoIP, WebSockets, Fastlane
  • Team: Multidisciplinary team of developers and UX/UI designers

The Great Divide: Model-Centric vs. Data-Centric Approaches

Data and models are the bread and butter of machine learning (ML). Because academic research and data science competitions focus mostly on improving ML models and algorithms, the data often remain overlooked. This creates an artificial division between the data and the model in an ML system, framing two separate approaches to AI: model-centric and data-centric.

The benefits of excellent models


A famous quote often attributed to the statistician George Box says that all models are wrong but some are useful. By extension, some models are extremely useful, and some are, let’s face it, useless. To build a good ML solution, you need a model that captures the underlying dependencies in the data, filtering out the idiosyncratic noise and performing well on new, unseen data.

Model improvements can be achieved in various ways. While there are many common recipes and tools for model optimization, in many applications the modelling work remains more art than science. The usual workflow includes:

  • Testing various model architectures and specifications, different objective functions and optimization techniques;
  • Fine-tuning the hyper-parameters defining the model structure and the model-training process.

What is referred to as the model-centric approach is the activity of dedicating time and resources to iterating on the model. The goal is to improve the accuracy of the ML solution while keeping the training data set fixed.
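To make that loop concrete, here is a minimal sketch of model-centric iteration in Python using scikit-learn; the model family, search grid, and synthetic data are all illustrative assumptions, not taken from any particular project. Note that the training set never changes, only the model does.

    # Model-centric iteration: the data stay fixed while the model
    # family and hyper-parameters are searched over.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import GridSearchCV, train_test_split

    X, y = make_classification(n_samples=2000, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    search = GridSearchCV(
        GradientBoostingClassifier(random_state=0),
        param_grid={
            "n_estimators": [100, 300],
            "max_depth": [2, 3],
            "learning_rate": [0.05, 0.1],
        },
        cv=5,
    )
    search.fit(X_train, y_train)  # same training set every iteration
    print(search.best_params_, search.score(X_test, y_test))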

The closer one gets to the realistic limits of model performance, the smaller the room for model improvement becomes, and the marginal return on the time and resources spent starts to diminish. None of this means the ML solution as a whole has reached its potential: there may still be vast room for improvement elsewhere.

The benefits of high-quality data


Once you see that you have reached the potential of your model on the given dataset, the usual go-to is the universal advice to “get more training data.” Often that is all you need to reach your performance goals. Sometimes, though, what you need is not more data but better data.

The data-centric approach is concerned with improving the overall performance of the ML solution by focusing on the quality and sufficiency of the data while keeping the model-training part fixed. It suggests nothing novel or revolutionary; it is a reminder that no model can be better than the data it was trained on, and that improvements in data quality can bring much higher performance gains for the overall ML solution.

Data consistency, data coverage, label consistency, feedback timeliness and thoroughness, and model metadata are some of the aspects of the data that can improve your ML solution.

  • Consistent data is data; anything else is confusion and ambiguity. Are your ETL (extract, transform, load) pipelines providing the clean, systematic data your ML applications need? If not, a greater effort may be required to improve the relevant processes.
  • Data coverage asks whether the sample you train your model on is representative of the population the model will be used on. If some subpopulations or classes are underrepresented, evaluate the likely effect and, if needed, think about how to overcome it; data filtering, rebalancing, or augmentation often helps. Another aspect of coverage is content: are all characteristics relevant for discriminating between observations present in your dataset, and if additional features would help your ML task, can you get them?
  • Label consistency: a huge issue for any supervised ML task. From the correct definition of the labels to the accurate labelling of the dataset, every aspect can strongly affect the outcome of model training. Multiple strategies and techniques can improve the labels in your project, and it is always a good idea to spend some time checking their quality manually, even on a very small subset of the data (a minimal spot check is sketched after this list).
  • Monitoring data: an ML system is not done once it is deployed to production. Model performance will inevitably deteriorate due to data or concept drift, and good monitoring is the first line of defence against such a trend. One often cannot foresee in which respect the model's input data may shift or how its performance may degrade, so monitoring a wider range of indicators and subpopulations can reveal underlying changes faster (a simple drift check is also sketched after this list).
  • Model metadata: a high-quality ML system also means transparency and reproducibility. Performance metrics and the artifacts needed to reproduce a model, collectively called model metadata, ease the work of model experimentation and optimization.
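To illustrate the label-consistency point, here is a minimal sketch of a manual spot check: two annotators re-label a small random subset, and their agreement is measured with Cohen's kappa. The labels below are simulated stand-ins for real annotations.

    # Label-consistency spot check on a 50-item subset.
    import random
    from sklearn.metrics import cohen_kappa_score

    random.seed(0)
    labels_a = [random.choice(["churn", "stay"]) for _ in range(50)]
    # Annotator B agrees with A about 85% of the time in this simulation.
    labels_b = [a if random.random() < 0.85 else ("stay" if a == "churn" else "churn")
                for a in labels_a]

    kappa = cohen_kappa_score(labels_a, labels_b)
    print(f"Cohen's kappa: {kappa:.2f}")

Low agreement usually points to ambiguous label definitions rather than careless annotators, so the fix tends to start with the labelling guide, not the labellers.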
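For the monitoring point, here is a simple sketch of input-drift detection: each feature's live distribution is compared against its training distribution with a two-sample Kolmogorov-Smirnov test. Feature names, data, and the significance threshold are illustrative assumptions.

    # Per-feature drift check against the training distribution.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    train = {"age": rng.normal(40, 10, 5000), "balance": rng.lognormal(8, 1, 5000)}
    live = {"age": rng.normal(46, 10, 1000), "balance": rng.lognormal(8, 1, 1000)}

    for feature in train:
        stat, p_value = ks_2samp(train[feature], live[feature])
        if p_value < 0.01:  # a shift this large is unlikely to be chance
            print(f"ALERT: drift in '{feature}' (KS={stat:.3f}, p={p_value:.1e})")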

Business and analytic tradeoffs


How do you strike the right balance between improving your code and improving the quality of your data? As with any other decision, you can put some data to use.

Analyze your processes and measure the ratio of time spent working on the data versus time spent working on the code to improve the accuracy of your ML applications. Time-box the model-optimization work, put the model into production once you reach satisfactory results, and start collecting feedback to gain insight into your model and improve your data set. Have the MLOps team prioritize high-quality data throughout all phases of the ML project.

It might also be worth reconsidering the composition of your ML teams: how many data engineers and analysts do you have versus ML engineers and modellers?

This generalizes to the organizational level for any decision concerning your data assets and ML projects. Consider building and maintaining better data infrastructure before investing in more ML projects, and weigh how better data quality and infrastructure can improve the profitability of the projects you do undertake.

Where to go from here?


Starting in the investigation phase of the project, spend some time estimating the upper feasible limit on the performance of the model to be built. If it is a frequently occurring ML task, check the literature for the level already achieved by other data scientists. Alternatively, take a small sample and measure human-level performance on it. This serves as a guideline for the feasible model performance on the task at hand.
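A minimal sketch of the second option, estimating human-level performance on a small sample as a rough ceiling (the labels below are simulated; in practice the human answers come from your annotators and the ground truth from a trusted source):

    # Human-level accuracy on a 100-item sample, with a rough 95% CI.
    import math
    import random

    random.seed(1)
    n = 100
    ground_truth = [random.choice([0, 1]) for _ in range(n)]
    # Simulated human answers that are right about 93% of the time.
    human = [g if random.random() < 0.93 else 1 - g for g in ground_truth]

    acc = sum(h == g for h, g in zip(human, ground_truth)) / n
    half_width = 1.96 * math.sqrt(acc * (1 - acc) / n)  # normal approximation
    print(f"Human-level accuracy: {acc:.2f} +/- {half_width:.2f}")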

Once realistic benchmarks for the output of your ML project are set up front and the first model prototype is ready, carefully analyze what is missing to reach that benchmark. A quick analysis of your model's errors, compared against the human-level benchmarks and broken down by potential gaps, can tell you whether it is worth continuing to train and optimize the model, or better to spend more time on collecting additional data, improving the labels, or creating features. Iterate.
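One way to run that quick error analysis is to slice test-set results by subpopulation, as in the sketch below (segment names and counts are hypothetical). A segment whose accuracy lags far behind the benchmark, especially one with few examples, points to data work rather than model work.

    # Slice-based error analysis: accuracy and support per segment.
    import pandas as pd

    results = pd.DataFrame({
        "segment": ["new_user"] * 40 + ["returning"] * 160,
        "correct": [0] * 22 + [1] * 18 + [0] * 16 + [1] * 144,
    })

    by_segment = results.groupby("segment")["correct"].agg(["mean", "count"])
    print(by_segment)  # new_user: 45% on 40 rows; returning: 90% on 160 rows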

Moving through these phases effectively calls for a data-centric infrastructure around the ML solution: an automated retraining and deployment process, plus integrated model monitoring that quickly feeds model feedback and new training-data increments back to trigger retraining or reworking. This requires a mature MLOps infrastructure providing timely, consistent, high-quality data for your system. Tools and expertise for building full MLOps pipelines are accumulating quickly to meet the new requirements and demand in production ML.
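In its simplest form, such a retraining trigger might look like the sketch below; every name and threshold here is an assumption, and in a real pipeline this check would run inside the monitoring job and hand off to your orchestration or CI/CD tooling.

    # Retrain when accuracy degrades or enough new labelled data arrives.
    def should_retrain(live_accuracy: float,
                       baseline_accuracy: float,
                       new_labelled_rows: int,
                       max_drop: float = 0.03,
                       min_new_rows: int = 10_000) -> bool:
        degraded = baseline_accuracy - live_accuracy > max_drop
        enough_data = new_labelled_rows >= min_new_rows
        return degraded or enough_data

    # Called from a scheduled monitoring job:
    if should_retrain(live_accuracy=0.88, baseline_accuracy=0.92,
                      new_labelled_rows=3_500):
        print("Triggering retraining pipeline...")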

Prioritize data quality over data quantity. Creating and maintaining systematic, high-quality data for your business unlocks the potential for better analytics and better ML solutions across your organization. Instead of investing in separate models for each of the many use cases you want to address, put your data at the centre of your decision-making and build the data infrastructure that lets you create cutting-edge ML solutions, reach the quality necessary to make the ML investment profitable, and protect your solutions from deteriorations in performance that are hard or costly to fix.

And know that you are not alone in this. Andrew Ng is on a quest for greater data awareness, and more and more useful content on the topic can be found on the Data-Centric AI Resource Hub.

The data should show the way


The data-centric approach isn’t anything new. Applied data scientists and ML practitioners have always known that the data is the guiding light, the main ingredient in their recipes. What the data-centric approach emphasizes is that, in many applications, the marginal product of data-quality-related activities may be higher than that of model-related investment.

Let your data show you the way, and allow a gradual shift from a model-centric to a data-centric mindset to help you rethink how ML projects are formulated and implemented.

Do you need a partner in navigating through times of change?


At Wiser, we specialize in delivering success and will be happy to accompany you through your data science and analytics journey, all the way into the stratosphere. Learn all you need to know about data science or just book a consultation with our team of experts to start your data science journey efficiently, with the right team on your side.

Insurance Agency Management System for NowCerts

Client at a glance

14 years of operation
1,500+ Insurance agencies globally use NowCerts
Over 25% Year-over-year growth

Transforming Insurance Management with Scalable and Reliable Solutions

Through infrastructure modernization, advanced data processing, and custom solutions, NowCerts’ Insurance Agency Management System is now equipped to handle a growing client base with improved stability, performance, and security. Our partnership empowered NowCerts to exceed customer expectations and deliver a seamless user experience in the competitive insurance industry.

Challenge

NowCerts, a leading provider of an Insurance Agency Management System (AMS), faced challenges scaling its platform to meet the needs of a rapidly growing client base. The system struggled with:

  • Performance Issues: Slow response times, unresponsiveness, and frequent errors caused by timeouts and bugs.
  • Stability and Reliability: Weak system architecture unable to handle increasing workloads.
  • Security Risks: Initially designed for internal use, the platform lacked robust security features.
  • Data Processing Bottlenecks: Inefficient data layer operations hindered system scalability.

NowCerts sought a partner to overhaul its system infrastructure, improve functionality, and ensure 24/7 online availability.

Our Approach

We collaborated with NowCerts to analyze the existing platform, identify pain points, and implement a series of targeted improvements.

Key actions included:

  1. Infrastructure Modernization:
    • Enhanced the system architecture by deploying web farms and additional servers to improve scalability.
    • Built a high-availability setup to ensure consistent performance and reliability.
  2. Optimized Data Processing:
    • Developed advanced data processing algorithms using MSSQL to enhance system efficiency.
    • Designed and implemented ETL processes using SSIS to streamline data imports from external systems.
  3. Custom Solutions and Reporting:
    • Built a comprehensive reporting system using MS Reporting Services, enabling clients to gain actionable business insights.
    • Developed custom solutions, including SQL server jobs, stored procedures, and SQL functions, to improve functionality and performance.
  4. Strengthened Security:
    • Addressed security gaps by enhancing system protections to align with industry best practices.

Impact Delivered

  • Improved System Performance: Achieved faster response times and greater stability through infrastructure upgrades.
  • 24/7 Online Availability: Ensured uninterrupted access for users, enhancing the customer experience.
  • Enhanced Scalability: Built a system capable of handling a rapidly expanding client base without sacrificing performance.
  • Efficient Data Management: Streamlined data processing and reporting, providing clients with real-time insights into their operations.

Expertise and Scope

  • Deliverables: Upgraded system architecture, reporting system, custom SQL solutions, ETL processes
  • Technology Stack: .NET Framework, C#, Web Forms, WinForms, LINQ to SQL, Web Services, REST Services, Windows Services, jQuery, Bootstrap, MSSQL Server, SSRS, IIS