Top 5 Learning and Development Trends 2023

Do you know what to expect in 2023?
If you are still writing your strategy, we recommend including the relevant learning and development trends in it.
You don’t need to follow all of these trends; as you look through the list, focus on just one of them.

Which one would make the biggest positive impact in your organization?
Let’s get ahead of the curve and take a peek into the latest learning trends:

  1. Virtual and augmented reality for immersive learning experiences.
  2. Increased use of artificial intelligence and machine learning in learning and development.
  3. Personalization of learning, with a focus on individual learning styles and needs.
  4. Use of microlearning and bite-sized content for on-the-go learning.
  5. Greater emphasis on social and emotional learning to support overall well-being and workplace effectiveness.

Trends come and go! But your own app can become a long-lasting brand story. Make it a strong, creative and unique one.

Let’s discuss how to turn your ideas into your own brand app.

Digital Transformation Project for an Insurance Company

Over the years IT Creative Labs has implemented many projects of varying complexity across many different niches. Our development team comprises senior professionals only, and today we asked our lead backend engineer Vlad to share some insights into his work, using one of the projects as an example.

The insurance company wanted to semi-automate the review of insurance claims, track statistics, and semi-automate filling out the forms should a claim be approved for coverage. To achieve that, the company needed to automate the processing of user documents and obtain structured user data as a result of that processing. This automated process also had to integrate with the systems the insurance company was already using. The first thing we did with the client was break their requirements down into smaller tasks, identifying the key objectives.

Useful side note from Vlad: Any task, even the most difficult one, can be broken down into smaller sub-tasks. This concept is called Microproductivity.

Here is the breakdown Vlad & the team proposed:

Task 1

Problem: Documents were rotated differently; some were flipped upside down.

Solution: We created an automated process that rotates every document into the correct orientation, using a simple computer-vision (CV) algorithm.
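
The project’s code isn’t public, but a minimal deskewing sketch in Python with OpenCV conveys the idea (it corrects skew; fully upside-down pages typically need an extra orientation check, e.g. via OCR confidence):

import cv2
import numpy as np

def deskew(image):
    """Rotate a scanned page so its text lines are horizontal."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Invert so text pixels are non-zero; Otsu picks the threshold automatically.
    thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
    coords = np.column_stack(np.where(thresh > 0))
    angle = cv2.minAreaRect(coords.astype(np.float32))[-1]  # skew of the text block
    if angle < -45:
        angle = -(90 + angle)
    else:
        angle = -angle
    h, w = image.shape[:2]
    matrix = cv2.getRotationMatrix2D((w // 2, h // 2), angle, 1.0)
    return cv2.warpAffine(image, matrix, (w, h),
                          flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)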

Task 2

Problem: Scans arrived at inconsistent scales: a scan might be A4-sized while the document itself was only the size of an ID card.

Solution: Implemented a fixed-scale document template by cropping along the edges of the document and removing the surrounding white space.
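
A hedged sketch of that cropping step with OpenCV; the target template size below is an arbitrary A4-like placeholder, not the project’s actual value:

import cv2

def crop_to_template(image, target_size=(1240, 1754)):  # hypothetical template size
    """Crop away surrounding white space and rescale to a fixed template size."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # The largest contour is assumed to be the document itself.
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return cv2.resize(image[y:y + h, x:x + w], target_size)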

Task 3

Problem: The system needed to identify the type of each document in order to extract the relevant fields from it.

Solution: A machine learning (ML) classifier was implemented: a convolutional neural network trained on a high volume of documents, with error validation via backpropagation.
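
A hypothetical outline of such a classifier in Keras; the input size, layer sizes and number of document classes are illustrative, not the project’s actual values:

from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 5  # hypothetical number of document types

# Stacked convolutions learn the visual layout features that
# distinguish one document type from another.
inputs = keras.Input(shape=(224, 224, 1))       # grayscale scans, resized
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels))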

Task 4

Problem: Many different document types, including a lot of older formats and hand-written documentation.

Solution: Implemented text recognition from images, using a classic recognition model for the Latin alphabet.
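
The project implemented its own recognition model; as a stand-in, an off-the-shelf engine such as Tesseract illustrates what this step does (the input file name is hypothetical):

import cv2
import pytesseract  # Python wrapper around the Tesseract OCR engine

image = cv2.imread("scanned_claim.png")          # hypothetical input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # OCR works best on grayscale
text = pytesseract.image_to_string(gray, lang="eng")
print(text)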

Task 5

Problem: Extract the recognized information and structure it properly, mapping it to the correct fields so it is presented in a cohesive and standardized manner.

Solution: Created an automated machine learning model, trained on specific types of documents, to extract information from each document and fill in the fields. When fields came back empty, an additional filling algorithm placed all the appropriately identified elements on a coordinate grid.
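
A greatly simplified sketch of the mapping step in Python; the real system used a trained extraction model, and these regex patterns and field names are invented purely for illustration:

import re

# Hypothetical field patterns; the real project used a trained extraction model.
FIELD_PATTERNS = {
    "policy_number": re.compile(r"policy\s*(?:no\.?|number)[:\s]*([A-Z0-9-]+)", re.I),
    "claim_date": re.compile(r"date\s*of\s*claim[:\s]*(\d{2}/\d{2}/\d{4})", re.I),
}

def extract_fields(recognized_text):
    """Map raw OCR output onto a standardized set of form fields."""
    fields = {name: None for name in FIELD_PATTERNS}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(recognized_text)
        if match:
            fields[name] = match.group(1)
    return fields

print(extract_fields("Policy Number: AB-1234\nDate of Claim: 12/05/2022"))
# -> {'policy_number': 'AB-1234', 'claim_date': '12/05/2022'}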

In Vlad’s own words: “Everything was simple: just several models, algorithms, training and the task was done.”

For this project the following tech stack was used: OpenCV, Python, TensorFlow and Keras, with React for the frontend, Flask for the backend, and PostgreSQL for data storage.

As a result of creating this semi-automated machine learning-based process and integrating it with the rest of the existing systems, the insurance company was able to significantly cut down processing time and human error while being able to process larger volumes without increasing its staff count.

If you have a project in mind that you’d like to chat about – reach out!

What is MVVM?

Model–view–viewmodel (MVVM) is a software architectural pattern that facilitates the separation of the development of the graphical user interface (the view) – be it via a markup language or GUI code – from the development of the business logic or back-end logic (the model) so that the view is not dependent on any specific model platform.

The viewmodel of MVVM is a value converter, meaning the viewmodel is responsible for exposing (converting) the data objects from the model in such a way that objects are easily managed and presented. In this respect, the viewmodel is more model than view, and handles most if not all of the view’s display logic. The viewmodel may implement a mediator pattern, organizing access to the back-end logic around the set of use cases supported by the view.

MVVM was invented by Microsoft architects Ken Cooper and Ted Peters specifically to simplify event-driven programming of user interfaces. The pattern was incorporated into Windows Presentation Foundation (WPF) (Microsoft’s .NET graphics system) and Silverlight (WPF’s Internet application derivative).

Like many other design patterns, MVVM helps organize code and break programs into modules to make development, updating and reuse of code simpler and faster. The pattern is often used in Windows and web graphics presentation software.

The separation of the code in MVVM is divided into View, ViewModel and Model:

  • View is the collection of visible elements, which also receives user input. This includes user interfaces (UI), animations and text. The View presents content to the user but does not itself contain the logic that changes what is presented.
  • ViewModel is located between the View and Model layers. This is where the controls for interacting with View are housed, while binding is used to connect the UI elements in View to the controls in ViewModel.
  • Model houses the application’s data and business logic, which the ViewModel retrieves in response to user input received through the View.
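
The pattern is not tied to any particular language or framework. Here is a rough, framework-free sketch of the three roles in Python, with data binding simulated by a plain callback (all names are invented for the example):

class Model:
    """Business logic and data; knows nothing about the UI."""
    def __init__(self):
        self._items = []

    def add_item(self, name):
        self._items.append(name)
        return list(self._items)

class ViewModel:
    """Converts Model data for display and accepts input from the View."""
    def __init__(self, model):
        self._model = model
        self._observers = []

    def bind(self, callback):
        # Stand-in for a framework's data-binding mechanism.
        self._observers.append(callback)

    def add_item(self, raw_name):
        # Called by the View on user input; updates the Model,
        # then pushes a presentation-ready value back to the View.
        items = self._model.add_item(raw_name.strip().title())
        for notify in self._observers:
            notify(", ".join(items))

# The "View" here is just a print callback bound to the ViewModel.
view_model = ViewModel(Model())
view_model.bind(lambda text: print(f"Items: {text}"))
view_model.add_item("  first claim ")   # prints: Items: First Claim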

MVVM is a powerful architectural pattern that has gained immense popularity in recent years due to its numerous advantages. However, as with any design pattern, it also has its drawbacks. To make an informed decision on whether to use MVVM for your application, it’s important to understand its key features, as well as its advantages and disadvantages. So, let’s explore them in more detail.

Features

MVVM separates the different concerns of an application, making it easier to maintain and scale. Let’s take a closer look at the key features of MVVM and how they can be improved:

  1. Life Cycle State

    One of the key benefits of MVVM is that it helps maintain the life cycle state of an application. The ViewModel can store and manage the application state, allowing the application to resume where the user left off. To improve this feature, we can use the Android Architecture Components like ViewModel and LiveData to persist data and manage the application state effectively.

  2. UI and Business Logic Separation

    MVVM keeps UI components away from the business logic, making the code more modular and maintainable. To further improve this feature, we can use Data Binding to simplify the code and reduce boilerplate. By using Data Binding, we can bind UI components directly to ViewModel properties, reducing the amount of code required to update the UI.

  3. Business Logic and Database Operations

    MVVM keeps the business logic separate from the database operations. This separation of concerns makes the code more testable and maintainable. To improve this feature, we can use the Repository pattern to further decouple the ViewModel from the database. The Repository acts as a mediator between the ViewModel and the database, providing a simple and consistent interface to perform database operations (see the sketch after this list).

  4. Easy to Understand and Read

    MVVM is designed to be easy to understand and read. The ViewModel acts as a mediator between the View and the Model, making it easier to reason about the code. To further improve this feature, we can use the SOLID principles to keep the code clean and maintainable. By following SOLID principles like Single Responsibility and Dependency Inversion, we can create code that is easy to understand and maintain.
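
As an illustration of point 3, here is a minimal, hypothetical Repository standing between a ViewModel and its storage; the names and the in-memory “database” are invented for the example:

class UserRepository:
    """Mediates between the ViewModel and the storage layer."""
    def __init__(self, db):
        self._db = db  # could be any database client behind the same interface

    def get_user(self, user_id):
        return self._db.get(user_id)

class UserViewModel:
    def __init__(self, repository):
        self._repository = repository  # the ViewModel never touches storage directly

    def display_name(self, user_id):
        user = self._repository.get_user(user_id)
        return f"{user['name']} <{user['email']}>" if user else "Unknown user"

# In-memory stand-in for a real database:
db = {1: {"name": "Ada", "email": "ada@example.com"}}
vm = UserViewModel(UserRepository(db))
print(vm.display_name(1))   # -> Ada <ada@example.com>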

Now that we have explored the key features of MVVM and how they can be improved, let’s take a closer look at the advantages and disadvantages of this architectural pattern.

Advantages

  • Maintainability

    The Model-View-ViewModel (MVVM) architecture pattern has become a popular choice for building software applications, and for good reasons. One of the key advantages of MVVM is its maintainability, which allows developers to remain agile and continuously release successive versions quickly. This is due to the clear separation of concerns within the architecture, making it easier to modify and update the codebase without affecting other parts of the application.

  • Extensibility

    Another benefit of MVVM is its extensibility. The architecture enables developers to add new pieces of code or replace existing ones without requiring significant modifications to the overall system. This makes it easier to scale and evolve the application over time, adapting to new requirements and changes in the market.

  • Testability

    Moreover, MVVM promotes testability by separating the business logic from the view layer, making it easier to write unit tests against the core logic. This not only improves the overall quality of the codebase but also reduces the likelihood of introducing new bugs during the development process.

  • Transparent Communication

    Finally, transparent communication between the layers of an application is another advantage of MVVM. The view model provides a clear and concise interface to the view controller, which populates the view layer and interacts with the model layer. This results in transparent and seamless communication between the different layers of the application, making the codebase easier to understand and maintain.

In conclusion, the advantages of MVVM make it a great choice for developers who want to build scalable, maintainable and extensible software applications. Its clear separation of concerns, testability and transparent communication between layers make it a powerful tool for building high-quality software applications that can adapt to changes in the market and evolving business requirements.

Disadvantages

Like any software architecture pattern, MVVM also has some disadvantages that developers should consider before adopting it. Here are a few of them:

  1. Learning curve: MVVM can have a steep learning curve for developers who are new to the pattern, which can lead to longer development times and potential mistakes during implementation.
  2. Increased complexity: While MVVM promotes separation of concerns, it can also increase the complexity of the application due to the added layers of abstraction. This can make it harder to debug and maintain the codebase.
  3. Overkill for simple UIs: For simple UIs, MVVM can be considered overkill, and using a simpler pattern or approach may be more appropriate.
  4. Designing the ViewModel: In larger applications, designing the ViewModel layer can be challenging, as it needs to handle multiple use cases and be flexible enough to accommodate changes in the future.
  5. Debugging complex data bindings: MVVM relies heavily on data binding, which can make debugging more difficult, especially when dealing with complex data bindings.

Despite these disadvantages, MVVM remains a popular and powerful architecture pattern for building software applications. Developers should weigh the pros and cons carefully and choose the architecture pattern that best fits their specific use case and project requirements.

What is CI/CD and Why is it so Popular?

What is CI/CD?

CI/CD is a set of practices that automate the building, testing, and deployment stages of software development. Automation reduces delivery timelines and increases reliability across the development life cycle.

Most modern applications require developing code across a variety of platforms and tools, so teams need a consistent mechanism to integrate and validate changes. Continuous integration establishes an automated way for a team to build, package, and test its applications. A consistent integration process encourages developers to commit code changes more frequently and to focus on business requirements, code quality, and security, which leads to better collaboration.

Continuous integration and continuous delivery are two distinct processes in CI/CD and have different purposes:

  • CI runs automated build-and-test steps to ensure that code changes reliably merge into the central repository.
  • CD provides a quick and seamless method of delivering the code to end-users.

So the main goal of CI/CD is to help developers ship software with speed and efficiency. The team continuously delivers code into production, maintaining an ongoing flow of new features and bug fixes.
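
Real pipelines are described in a CI server’s own configuration, but the fail-fast sequence of stages can be illustrated with a toy Python runner; the stage commands below are placeholders, not part of any particular tool:

import subprocess
import sys

# Placeholder stages; a real pipeline would define these in its CI tool's config.
STAGES = [
    ("lint", ["python", "-m", "flake8", "."]),
    ("test", ["python", "-m", "pytest"]),
    ("build", ["docker", "build", "-t", "myapp:latest", "."]),
]

for name, command in STAGES:
    print(f"--- running stage: {name}")
    result = subprocess.run(command)
    if result.returncode != 0:
        # Fail fast and notify, just as a CI server would.
        sys.exit(f"Stage '{name}' failed; aborting pipeline.")

print("All stages passed; ready to deploy.")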

The most popular CI/CD tools

A CI/CD tool helps DevOps teams create a pipeline and automate integration, deployment, and testing stages. Some tools specifically handle the integration (CI) side, some manage delivery and deployment (CD), while others specialize in continuous testing or related functions.

Here is a list of the most popular CI/CD tools you can choose from:

  • Jenkins: An automation server that can handle anything from simple CI to a complex CI/CD pipeline.
  • TeamCity: A CI server that helps build and deploy projects with reusable settings and configurations.
  • Spinnaker: An open-source CD platform ideal for multi-cloud environments.
  • GoCD: A CI/CD server that emphasizes modeling and visualization.
  • CircleCI: A flexible, cloud-based CI/CD tool perfect for smaller projects.
  • Travis CI: A Ruby-based tool with a robust build matrix.
  • Bamboo: A CI server with support for several top stacks (Docker, AWS, Amazon S3, Git, CodeDeploy, Mercurial) and up to a hundred remote build agents.

CI/CD enables more frequent code deployment.

So, let’s sum up

CI packages and tests builds, notifying developers if something goes wrong; CD automatically deploys applications and performs additional tests.

CI/CD pipelines are designed for organizations that need to make frequent changes to applications with a reliable delivery process. In addition to build standardization, test development, and deployment automation, we get a holistic production process for deploying code changes. The introduction of CI/CD allows developers to focus on improving applications rather than spending effort on deployment.

CI/CD is one of the DevOps practices, as it aims to resolve the tension between developers, who want to ship frequent changes, and operations teams, who require stability. With automation, developers can make changes more frequently, while operations teams gain greater stability because environment configuration is standardized and continuous testing is carried out during delivery. In addition, environment variables are kept separate from the application, and automated rollback procedures are in place.

However, CI/CD is just one of the processes that can contribute to improvements. There are other conditions for increasing the frequency of delivery.

To get started with CI/CD, the development and operations teams need to decide on technologies, practices, and priorities. Teams need to build consensus on the right approaches for their business and technology so that once CI/CD is implemented, the team consistently adheres to the chosen practices.

Everything You Need to Know About Docker & Docker Compose to Get Started

What is Docker Compose?

Docker is known for its use of OS-level virtualization and for the container system it employs to make creating, deploying and running applications much easier for developers.

While learning the basics of Docker, you may have come across the creation of simple applications that work autonomously, not depending, for example, on external data sources or on certain services. In practice, such applications are rare. Real projects usually involve a whole set of collaborative applications.

Put simply, Docker Compose lets you start a whole set of services with a single command.

So, Docker Compose is a software tool for defining and running multi-container Docker applications.

Difference Between Docker and Docker Compose

Docker is used to manage the individual containers (services) that make up an application.

Docker Compose is used to manage multiple containers that are part of an application at the same time. This tool offers the same features as Docker, but allows you to work with more complex applications.

Docker Compose Use Cases

  • Automated testing environments.

An important part of any deployment or integration process is the automated test suite.

Compose supports automated testing, an essential part of CI/CD, and provides a convenient way to create and destroy isolated testing environments. Using the appropriate Docker Compose file, developers can define and configure the environment needed for automated end-to-end testing in just a few commands.

  • Single host deployments.

Docker Compose runs containers on a single host, as it has traditionally been focused on development and testing workflows.

  • Development Environments.

Compose is a fast and simple way of starting projects as it can quickly spin up new isolated development environments. The software documents and configures all the application’s service dependencies (including databases, caches, web service APIs, etc.). It allows you to create and start one or multiple containers for each dependency using a single command.

For a detailed list of changes in past and current releases of Docker Compose, refer to the Changelog.

What features make Docker Compose so effective?

  • Multiple isolated environments on a single host
  • Preservation of volume data when containers are created
  • Recreation of only those containers that have changed
  • Support for variables and for moving a composition between environments

We have covered the basics of Docker Compose, enough for you to start using the technology and, if you wish, study it in more depth.

Do you use Docker Compose in your projects?

Overwhelmed by this content? Reach out with your next big idea and we’ll take care of all the technical details so you can focus on the bigger picture.

34 Web3 Terms You Should Know

Level up your Web3 vocab with these keywords from IT Creative Labs.

Our vocabulary is designed to help you navigate web3’s foundational concepts.

First of all,

Web 3.0 or Web3

The next generation of the internet, in which the web is a decentralized online ecosystem built on the blockchain.

Airdrop

An airdrop is an unsolicited distribution of a cryptocurrency token or coin, usually for free, to numerous wallet addresses.

Altcoin

Altcoin simply means any cryptocurrency other than Bitcoin. The term comes from “alternative coin” and refers to any newer cryptocurrency with a relatively small market cap.

BTC (Bitcoin)

The very first decentralized digital currency that can be transferred on the peer-to-peer bitcoin network.

Block

A batch of transactions written to the blockchain. Every block contains information about the previous block, thus, chaining them together.

Blockchain

Blockchain is a publicly accessible digital ledger used to store and transfer information without the need for a central authority. Blockchains are the core technology on which cryptocurrency protocols are built.

Bridge

A protocol that allows separate blockchains to interact with one another, enabling the transfer of data, tokens, and other information between systems.

Cold Wallet

A physical device used to store cryptocurrencies. Cold wallets can be hardware devices or simply sheets of paper containing a user’s private keys. Because cold wallets are not connected to the internet, they are generally a safer method of storing cryptocurrencies.

Consensus

The state of agreement amongst the nodes on a blockchain. Reaching consensus is necessary for new transactions to be verified and new blocks to be added to the blockchain.

Cryptocurrency

Cryptocurrency is the native asset of a blockchain, like Bitcoin or Ethereum. Coins are essentially tokens, also known as protocol tokens.

Dapp (Decentralized Application)

An application built on open-source code that lives on the blockchain. Dapps exist independently of centralized groups or figures and often incentivize users to maintain them through rewarded tokens.

DeFi (Decentralized finance)

Decentralized finance (DeFi) is an emerging financial technology based on blockchain. The system removes the control banks and institutions have on financial services, assets and money.

DEX (Decentralized Exchange)

DEX is a peer-to-peer cryptocurrency exchange built on the blockchain. A DEX is run by its users and smart contracts instead of an intermediary figure or centralized institution.

ETH/Ether

ETH (Ether) is the native cryptocurrency of the Ethereum platform, a decentralized ledger technology (blockchain). After Bitcoin, Ether is the most popular cryptocurrency.

Fiat

A currency established as legal tender, often backed and regulated by the government.

Floor

The current lowest price available to acquire an NFT in a collection.

Fork

A change to a blockchain protocol. When the changes are fundamental, the result may be a hard fork, leading to the formation of a separate chain with different rules. When the changes are minor and backward-compatible, the result is a soft fork.

Fractionalization

The process of locking an NFT into a smart contract and then dividing it into smaller parts, which are issued as fungible tokens. This lowers the price of ownership and allows artwork and other digital assets to be owned by a community.

Gas

Gas refers to the fee required to successfully conduct a transaction or execute a contract on the Ethereum blockchain.

Hashing

The process of taking data and creating a completely unique hash value. This hash value now acts as an identifier you can reference to retrieve the original data. This means that no matter how complex or large the data was, you can now easily identify this information by referencing its hash value.
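
For example, with Python’s standard hashlib, any input, however large, collapses to a fixed-length digest, and the slightest change produces a completely different one:

import hashlib

data = b"a very large document could go here"
print(hashlib.sha256(data).hexdigest())         # 64 hex characters
print(hashlib.sha256(data + b"!").hexdigest())  # one extra byte, totally different hash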

Hot Wallet

A cryptocurrency wallet that is always connected to the internet and the cryptocurrency network. It is used to send and receive cryptocurrency and lets you view how many tokens you have available.

Liquidity

A measure of how easily an asset can be bought, sold, or traded in a given market or on an exchange.

Mining

This is the process of verifying transactions, organizing them into blocks, and then adding those blocks to the blockchain. In Bitcoin, for example, mining creates fresh new coins and awards them to the node that mined the block.

Minting

The process of adding a transaction or block to a blockchain. The term is commonly used to describe creating an NFT on the blockchain for the first time.

NFT

NFT stands for non-fungible token. An NFT represents a unique digital asset on the blockchain and records who owns it.

Non-fungible

Means that an asset is completely unique and cannot be exchanged one-for-one for another.

Peer to peer (P2P)

An interaction in which two individuals deal directly with each other, without intermediation by a third party.

POAP

Stands for ‘proof of attendance protocol’. A POAP is an NFT used to commemorate an event or a certain moment in time.

PoS (Proof of Stake)

A consensus mechanism that requires nodes, called validators, to stake a set amount of cryptocurrency on the blockchain in order to verify transactions and mint blocks.

PoW (Proof of Work)

A consensus mechanism that requires miners to complete complex mathematical puzzles in order to verify transactions and mint blocks. When a miner correctly solves a puzzle, they gain access to mint the next block and receive the corresponding block reward and transaction fees.

Smart contract

Self-executing code deployed on a blockchain that allows transactions to be made without an intermediary figure and without the parties involved having to trust one another.

Stablecoin

A cryptocurrency whose price is designed to be pegged to another cryptocurrency, fiat money, or exchange-traded commodities, so that its value stays stable.

Token

A unit of value that can be transferred on a blockchain. Tokens are created by platforms and applications built on top of an existing blockchain.

Wallet Address

Similar to a bank account number. Your wallet address is a unique string of numbers and letters (also called a public key) that people can use to send you cryptocurrency. But only you can access your wallet’s contents by using the corresponding private key.

Web3 is growing rapidly, so knowing key Web3 terminology helps you better understand the conversations around the evolution of the internet and stay on top of the game.

Minimal Node.js Development Environment Using Docker Compose

Quick and painless setup of a basic local development environment for Node.js using docker-compose.

This is a quick tutorial on how to get a Node.js Docker container up and running for local development.

This approach does NOT require a Dockerfile and solves the infamous empty-server-response issues. No more “localhost didn’t send any data”, “ERR_EMPTY_RESPONSE” or “127.0.0.1 didn’t send any data”.

You will need Docker Community Edition (aka Docker Desktop) installed and running, and exactly two files to fire up a Node.js app: “docker-compose.yml” and “app.js”.


docker-compose.yml

version: "3"
services:
    app:
        image: node:alpine
        volumes:
          - .:/app
        working_dir: /app
        ports:
          - 80:80
        command: node app.js

app.js

const http = require('http');

// Bind to 0.0.0.0, not 'localhost': inside a container, listening on
// localhost only makes the server unreachable from the host and causes
// the empty-response errors mentioned above.
const hostname = '0.0.0.0';
const port = 80;

const server = http.createServer((req, res) => {
    res.statusCode = 200;
    res.setHeader('Content-Type', 'text/plain');
    res.end('Live long and prosper!\n');
});

server.listen(port, hostname, () => {
    console.log(`Server running at http://${hostname}:${port}/`);
});

Then navigate to the newly created folder in your console and run: “docker-compose up -d”

That’s all.
Now you can open your browser and access your app at “http://localhost”.
