WHAT WE THINK?
What do Yamaha, Keith Urban and itSilesia have in common?
It’s probably a question you’ve never thought of before, but now you’re eager to ask it. Perhaps you’re wondering what links the prestigious company Yamaha, the world-renowned guitarist Keith Urban, and itSilesia’s development team – after all, these are three completely different realms: musical tradition, artistic expression, and modern technology. In reality, this unique combination became the foundation of an innovative educational project that merges the precision of the instrument, the musician’s expertise, and advanced software engineering. The result is a guitar-learning app that not only hones your skills but also motivates daily practice by offering dynamic lessons, gamification, and personalized learning paths.
And we will be more than happy to answer it!
HOW IT ALL STARTED
A few years ago, we partnered with Uberchord, an app development company from Germany.
Uberchord was engaged by Yamaha, a global leader in the manufacture of musical instruments, to develop an app to support the URBAN guitar signed by Keith Urban, a well-known musician and guitarist.
The aim of the project was to create an interactive guitar learning platform using sound recognition technology, gamification elements and dynamic animations.
This is where itSilesia stepped in – our team of six developers and a project manager, who worked for four years to develop and maintain this application. Our experience with the Unity engine played a key role, enabling the dynamic real-time exercises to be realised.
When working on the Urban Guitar app, we focused on providing the most engaging and varied functionality possible, which was not only meant to teach, but also to motivate users to practice regularly.
HERE IS WHAT WE MANAGED TO CREATE
The main axis of the app is built around courses, which contain carefully designed lessons that help you learn how to play specific pieces of music. Each lesson is a set of exercises that develop the user's skills step by step. To aid learning, instructional videos appear between lessons with practical tips for each section.
To keep users constantly motivated, we implemented a system of rewarding points for completing lessons. In addition, the app is equipped with a progress-tracking module that shows, among other things, the time spent practising, the number of experience points gained, the chords learned and the level of proficiency. Users can thus track their progress over time, which significantly increases their engagement.
One of the unique features of the app is the ‘daily workout’ function. Based on the chords the user has already mastered, the app automatically generates a new song to practise. This provides a personalised approach to each user, making training more individual and attractive.
One of our priorities was to better understand users' needs. To this end, we implemented an analytics system that allowed us to monitor how users were using the app. With this data, we were able to better adapt its functionalities to the expectations of the audience.
https://youtu.be/oddY9wT19_I?feature=shared
TECHNOLOGY
The technological basis of the application was the Unity Engine, which made it possible to create dynamic animations and interactive exercises in real time. Unity excelled in the implementation of gamification elements, which added a whole new dimension to learning to play guitar.
We also used React Native, which significantly accelerated the work on the CMS module. It made the management of course content and the implementation of new features faster and more efficient.
Our goal was to create a product that combined modern technology, intuitive use and advanced gamification mechanics. The result of our work was an application that gained wide recognition among users.
WHY ITSILESIA?
IT outsourcing is a popular and effective solution in the IT industry that allows companies to hire specialised teams. This gives companies access to experts with the unique skill sets needed to deliver demanding projects. As itSilesia, we often partner with a variety of entities, offering our developers and teams to support them with work or complex projects. Our experience, commitment and skills led to us being entrusted with the work on Urban Guitar.
The application was very popular and became a success. So much so that ...
...this is not the end of the story! In the following parts, we will tell you how we further developed the app and what possibilities technology offers in the context of music education and beyond!
26 March 2025
Trade fair applications – Sink or Soar?
Whether it is worth investing in multimedia at the trade fair stand.
When planning their participation in a trade fair as an exhibitor, companies race to come up with ideas to attract visitors. After all, the aim is to generate interest in the offer, build brand awareness and establish business relations that will bear fruit in the future. Regardless of whether you are presenting a product or service, you need something to make your trade fair visitors interested.
The ways are various – a rich gastronomic offer, pleasant and comfortable resting places, advertising gadgets, gifts, invited guests or unusual stand arrangements. One interesting and quite often applied solution is various types of multimedia applications, which are discussed in more detail in this post.
Surely, on many occasions when visiting trade fairs, you have come across totems or VR applications that provide the so-called WOW effect! Often these are simple games only smuggling a company logo somewhere between the lines, other times more or less impressive 3D walks or presentations.
But what is the value of such applications apart from the aforementioned effect? Actually... slim :) So, is it worth considering reaching for such a product at all, or can you immediately conclude that it is just an expensive Sink?
From our clients' experience... definitely worth it!
On more than one occasion we have had the opportunity to support our clients from various industries at trade fairs both in Poland and abroad. We have dealt with individual applications as well as their entire systems covering a cross-section of the company's portfolio. In total, we have prepared dozens of applications in various technologies. It is safe to say that we know our stuff.
So what, in our opinion, makes a good investment and what solutions make sense?
Well thought-out. Designed for long-term use, not for one-off use at a trade fair. Designed for a specific user within a company.
When designing applications together with clients, we always ask ourselves - who will use it after the trade show, how and why? What will it bring to your business? How can we develop it later?
Whether the end user is the sales, marketing or training department, the most important thing for us is that the app has a life of its own and is a useful tool in everyday work. Such an approach means, firstly, that we are able to prepare something really interesting, thanks to the involvement of responsible people who know it will be useful to them in the future, and secondly, that we can develop the product further, improve and supplement it, so it is not just a one-off expense but added value.
By advising our clients in this way, steering away from some ideas and developing others, together we have been able to create a number of hit products that have been working brilliantly for years. And, by the way, they make a WOW effect at trade fairs!
So we encourage you to dig deeper and try to design something for your own business.
20 March 2023
2022 Summary of the year
Before we know it, another year of joint adventures and work is behind us!
So, traditionally, a few words of summary.
2022 was certainly a year of stability for itSilesia. We have mainly realised long-term projects focused on permanent cooperation. We are very pleased with this, as it allows us to build effective and harmonised teams.
So, what have we achieved?
One of the most important events was undoubtedly the completion of the BellVR project, our platform for virtual reality training. The first major implementation is behind us! However, this is not the end of the work. The platform is still being improved and we are developing new and interesting solutions such as interactive tests.
Another important success was the launch of the Rectus Polska mobile showroom. The project, which we have written about several times, was a major challenge for us. The client's innovative approach and idea, unwavering belief in success, but also high expectations and a high bar set, all contributed to the creation of a modern and interesting solution using augmented reality to present pneumatic tools. Thank you once again for your trust.
As the summer heat cranked up, Famur SA's showcase at the International Expo 2022 turned into an impressive display of innovation and state-of-the-art technology, with AR applications, HoloLens and touch totems letting attendees immerse themselves fully in the tech on show.
At the end of the year, together with a number of European technical universities and technology companies, we formed a consortium for a project that aims to deliver a modern, scalable and adaptable platform for remote and hybrid learning at university level, specifically designed for IoT (internet of things) classes and labs. The project is a continuation of the 2016-2019 IOT-OPEN.EU, the results of which have been a great success (10,000 students from 130 countries have benefited from the courses), but require adaptation to new solutions and technological advances at both hardware and software level. The project is part of the Erasmus+ programme, funded by the European Union. We are looking forward to working together!
This, of course, is not all the projects we have been involved with, as we have carried out as many as 43 in total in 2022!
There was also a lot going on behind the scenes. Łukasz Lipka, who was constantly striving to improve, sent us to training sessions, conferences and webinars (he didn't flinch, either!). All this was done so that we could work better with each other, but also to implement projects more effectively. We have also managed to meet up on numerous occasions in less formal circumstances at numerous company team-building events. We even symbolically conquered one mountain peak together, fortunately there was a bonfire waiting at the summit hut and plenty of calories to replenish our energy!
itSilesia continues to grow. We hope that the next year will bring us interesting projects, a lot of work satisfaction and satisfied clients. Especially as this will already be... our 15th year on the market! In the constantly and dynamically changing IT industry, staying up to date with the latest trends and technologies is a great challenge, which, however, gives a lot of satisfaction and fun. Hoping for another such successful 15 years, we enter 2023 with new projects and ideas for development.
2 January 2023
Why do we need AI in VR and AR, and how did we do it?
“Why”, or more precisely, “for what purpose?”
AR and VR are, respectively, indirect and direct interactions of hardware and software with humans. We have already written about the differences between AR and VR technologies on our blog (here). Because each of us is slightly different and devices are mass-produced, the issue of individually customizing the interaction between hardware and the user arises. Contrary to appearances, this is not so simple, since classic algorithms have very limited adaptive capabilities, not to mention the hardware. Of course, we are not referring to adjusting the thickness of the foam in VR goggles to the shape of the face :-). We mean interaction with and operation of a complex system at the software level. Ideally, algorithms would adapt themselves to the user or, like humans, be able to learn by observing and exhibiting a high degree of tolerance. After all, any of us can easily recognize the Kozakiewicz gesture :-) A tip for younger readers: look up what this means on Wikipedia, e.g. here. In such situations, where adaptation is required and information is not unambiguous, AI successfully replaces classic algorithms.
When planning our next project, we decided to incorporate elements of artificial intelligence. Since integrating AI with both VR and AR in a single project remains rare, we decided to share our solutions, comments, and observations.
Stage of Enthusiasm
The task we set for our team sounded quite prosaic: dynamic recognition of gestures performed by the user using their hand (we focus on a single hand, and it doesn’t matter whether it is the left or the right), with minimal delays. This way, our system could automatically verify the user’s intentions and the correctness of actions performed in the virtual world. From our point of view, this was an essential component in training systems where users practice interacting with machines (construction, mining, or any others) in a virtual environment. Initially, we focused on typical operations, namely: grabbing something (a rod, switch, handle), rotating left or right, pressing (a button), and similar simple yet most commonly performed manual interactions with equipment. The topic did not look too “threatening” – after all, many solutions have such features built in, albeit often in a heavily limited scope.
Stage of Curiosity
Since we additionally assumed that the system must work with various AR, VR, and other hand-interaction devices (ranging from Microsoft HoloLens to Leap Motion), the ideal would be to have something like a HAL (Hardware Abstraction Layer) so that we do not have to prepare solutions for specific hardware. Microsoft’s MRTK (Mixed Reality Toolkit) came to our aid, where data on hand position (fingers, joints) is delivered in a unified way, regardless of which hardware we have. Microsoft learned its lesson from the MS DOS and Windows 95 driver era, where developers were cursed with the need to create multiple software versions so it would work with various hardware configurations.

OK – it is true that some devices do not transmit the full set of data, for example due to hardware limitations; nevertheless, the data transmitted even by the most “limited” devices turned out to be sufficient. The real problem, however, turned out to be not so much the lack of data as their excess, as you will see shortly.
MRTK transmits data as the position and rotation of all constituent parts of a single hand, 25 in total. Rotation data is transmitted using quaternions. Broadly speaking, they correspond to joints or the so-called “knuckles,” the places where the fingers bend. This data is transmitted in absolute form, meaning that position and rotation are defined relative to the initial position in the virtual world’s coordinate system. You can read more about this solution here: https://docs.microsoft.com/en-us/windows/mixed-reality/mrtk-unity/features/input/hand-tracking
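As an illustration, reading a single joint’s pose through MRTK’s hardware-agnostic API can look roughly like the sketch below; the choice of joint and the logging are ours, added for this article, not a fragment of the original project code.
[csharp]
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.Utilities;
using UnityEngine;

public class HandJointSampler : MonoBehaviour
{
    void Update()
    {
        // MRTK exposes every tracked joint through the same call, regardless of the device.
        if (HandJointUtils.TryGetJointPose(TrackedHandJoint.IndexTip, Handedness.Any, out MixedRealityPose pose))
        {
            // Position and Rotation (a quaternion) are delivered in world-space coordinates.
            Debug.Log($"Index tip at {pose.Position}, rotation {pose.Rotation}", this);
        }
    }
}
[/csharp]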
Stage of Return to School
Our gesture analysis is local in nature, so we are not interested in the hand’s spatial position. Consequently, we focused on using rotation information. However, one problem arose here: how to convert global rotations into local ones when they are recorded as quaternions? A brief review of available information in scientific and popular literature indicated it should not be difficult. So we prepared formulas (theory), developed software along with visualization (practice), and… combined theory with practice, which turned out not to be so simple: at first, it seemed that nothing worked and no one knew why, and the hands after transformation looked like something out of a bad horror movie. Ultimately, however, we managed to tame the mathematics.
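For the curious, the core of that transformation boils down to multiplying by the inverse of the parent joint’s rotation. A minimal sketch (the helper name is ours; MRTK delivers the world-space rotations, as described above):
[csharp]
using UnityEngine;

public static class HandMath
{
    // q_local = inverse(q_parentWorld) * q_childWorld
    // Strips the parent joint's world rotation, leaving only the relative bend of the child joint.
    public static Quaternion ToLocalRotation(Quaternion parentWorldRotation, Quaternion childWorldRotation)
    {
        return Quaternion.Inverse(parentWorldRotation) * childWorldRotation;
    }
}
[/csharp]
Applied joint by joint along each finger chain, this removes the global pose of the hand and leaves only the relative bending information that the gesture analysis cares about.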

The data stream coming from MRTK and transformed by our software creates what we call a time series, and this is exactly what is analyzed by our artificial intelligence mechanism. If the concept of a time series is abstract to you, imagine successive frames of a film showing a moving hand: it is exactly the same here, except instead of pixels we have numerical data.
Stage of Panic
Reconnaissance of the battlefield (i.e., scientific articles and the current state of knowledge) revealed that… no one had done this before us! Seriously. Absolutely no one. Everyone experimenting with gesture recognition uses video streams and, optionally, depth cameras. Moreover, they do it on compute clusters using advanced graphics cards (GPUs) and powerful processors. Meanwhile, we have to do it on mobile devices with fairly limited data. What is more, even after discarding position information, our data stream was still “huge” for the limited resources of AR and VR mobile devices: 25 quaternions (a quaternion, as the name suggests, is 4 floating-point values) delivered to the system dozens of times per second. This can choke even an efficient compute system, let alone a mobile phone–class device.
Stage of Brainstorming
Fuzzy Logic and Fuzzy Inference Systems are suitable for time series analysis and have been present in science for quite some time; however, due to their computational complexity and implementation difficulties, they are rarely encountered in industrial solutions. Meanwhile, with the development of Deep Learning, Recurrent Neural Networks (RNNs) have become increasingly popular, especially their special form, Long Short-Term Memory (LSTM) networks, and their derivatives, so-called Transformers (at the time of writing this text, the topic was so new that it had not yet received an equivalent term in Polish). Initially, we planned to apply a complex, multilayered LSTM network to solve the entire problem in one step.
Unfortunately, LSTM networks require substantial computational resources, not only during training but also during inference, although these resources are definitely less than those needed by comparable fuzzy logic models. Nevertheless, advanced data optimization, dimensionality reduction, and ultimately a complete change of approach were necessary, as you will read below, because porting a “straight” trained network to a mobile platform resulted in unacceptable lag, and the “playability” of our solution placed us at the tail end of the worst VR experiences ever created by a developer.
Stage of Euphoria
Without going into lengthy considerations of how much time, resources, and effort we devoted to finding the optimal solution, we can proudly say: Yes, it works! And it works smoothly. Nevertheless, it was necessary to adopt a hybrid approach: the time series is analyzed for static gestures using a convolutional network, a model that is significantly faster and introduces minimal latency because it uses only one “frame” of data. A similar approach is used, for example, in object recognition in popular AI image recognition models such as Inception or Yolo, which also utilize convolutional layers. When the convolutional network–based model recognizes a characteristic hand configuration that may potentially start the sequence we are interested in, a second model using LSTM comes into play. It operates on a very limited dataset for performance reasons. Such a hybrid works well on AR and VR devices (e.g., Oculus Quest and HoloLens 2), which have limited compute resources and rely primarily or solely on the CPU when using networks. Current AI frameworks do not provide computation for GPUs integrated into ARM platforms.
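Conceptually, the hand-off between the two models can be sketched like this. The class and method names, the window length, and the threshold below are illustrative assumptions for this article, not our production code:
[csharp]
using System.Collections.Generic;
using UnityEngine;

public class HybridGestureRecognizer
{
    readonly Queue<float[]> recentFrames = new Queue<float[]>();
    const int SequenceLength = 30;       // illustrative window fed to the sequence (LSTM) model
    const float TriggerThreshold = 0.8f; // illustrative confidence needed to wake the sequence model

    // Called once per tracking frame with the pre-processed joint rotations.
    public void OnFrame(float[] jointFeatures)
    {
        recentFrames.Enqueue(jointFeatures);
        if (recentFrames.Count > SequenceLength)
        {
            recentFrames.Dequeue();
        }

        // Cheap model: looks at a single frame only and runs every frame.
        float startPoseConfidence = RunStaticPoseModel(jointFeatures);

        // Expensive model: runs only when a candidate start pose has been spotted.
        if (startPoseConfidence > TriggerThreshold)
        {
            string gesture = RunSequenceModel(recentFrames);
            Debug.Log($"Recognized gesture: {gesture}");
        }
    }

    // Stand-ins for the actual model inference calls.
    float RunStaticPoseModel(float[] frame) => 0f;
    string RunSequenceModel(IEnumerable<float[]> frames) => "none";
}
[/csharp]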
Technical Tidbits
For both models, the convolutional and the LSTM, machine learning was required. For this purpose, we planned to use existing PC frameworks such as Keras, PyTorch, or Caffe. Ultimately, we chose Keras due to its maturity, a substantial number of proven commercial applications, and support for mobile devices (e.g., TensorFlow Lite and the ability to convert models to other formats). Moreover, Keras integrated with TensorFlow appears to be the most stably supported solution by NVIDIA CUDA, i.e., GPU-accelerated computation.
Transferring a trained model from the training platform (PC) to the target solution (AR/VR device) is theoretically quite simple, but in practice it can be troublesome. In our case, we basically had only two possible solutions: exporting the trained model to TFLite (the dedicated format for TensorFlow Lite) or to the open ONNX (Open Neural Network Exchange) format. The first approach works for platforms where TensorFlow Lite is available, unfortunately not for all (e.g., it is not available for HoloLens).
On the other hand, the TensorFlow Lite library itself has the advantage of being written at a low level in C++ and, even when creating applications in scripting or interpreted languages, the computational core runs directly on the processor. This also means that dedicated binary libraries are required for each platform. In the case of exporting and later importing to a mobile device in ONNX format, we can, in most cases, use a library that is universal because it is written in C#, Java (JavaScript), or Python and available as source code. Unfortunately, this second solution is decidedly slower, as is typical for interpreted languages.
Additionally, when using the entire development chain, one must be aware that there are many incompatibilities between library versions, both on the “training” side (PC) and on the “consuming” side (mobile devices), as well as between them. For example, training carried out with the TensorFlow library in version 2.4.0 or newer does not allow exporting our model in the same code to TFLite format (yes, TFLite!), because the Google developers responsible for the exporting mechanism apparently overslept and did not manage to “finish” the exporter and adapt it to the latest library versions.
We can export to ONNX format in version 2.4.0 without issues, but… without changing the default settings in such an “advanced” ONNX version, it is impossible to load the model into software for VR/AR goggles in practically any library because… we again encounter version incompatibility, this time at the level of “too new features” in the portable format, which, by the way, is advertised as open and universal. So, as you can see, we had quite a puzzle to solve, and at this stage the whole project more resembled an Escape Room challenge than classic, solid development. But we won’t hide that such challenges drive our team to work.
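For readers wondering what “consuming” such an exported model inside a Unity-based AR/VR app can look like, one commonly used route is Unity’s Barracuda package, which imports ONNX models. The sketch below illustrates that option in general terms and is an assumption for this article, not a description of our exact pipeline:
[csharp]
using Unity.Barracuda;
using UnityEngine;

public class OnnxGestureModel : MonoBehaviour
{
    [SerializeField] NNModel modelAsset; // the imported .onnx file
    IWorker worker;

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        // A CPU (Burst) worker, matching the mobile constraints described above.
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.CSharpBurst, model);
    }

    public float[] Predict(float[] features)
    {
        using (var input = new Tensor(1, features.Length, features))
        {
            worker.Execute(input);
            // The peeked output tensor is owned by the worker; copy the values out.
            return worker.PeekOutput().ToReadOnlyArray();
        }
    }

    void OnDestroy()
    {
        worker?.Dispose();
    }
}
[/csharp]
Whatever the runtime, the pattern is the same: load the converted model once, then feed it small tensors of pre-processed joint data each frame.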
Podium Stage
Finally, we have a pathway and a solution that allows for training on a PC platform, and we have models that, on one hand, provide high (exceeding 90%) gesture recognition accuracy and, on the other hand, operate so quickly that the user is not even aware of the complexity of the mechanisms behind the advanced analysis of their gestures – the whole thing runs practically in real time, with latencies below 100 ms (and most often much faster).
3 November 2021
A brief overview of what artificial intelligence is.
We live in a time where the phrase "artificial intelligence" (AI for short) is trendy and appears in the marketing descriptions of many products and services. But what exactly is AI?
Broadly speaking, AI originated as an idea to create artificial "thinking" along the lines of the human brain.
As of today, however, we can only make assumptions about how the human brain works, primarily based on medical research and observation. From a medical point of view, we know that the brain looks like a complex network of connections in which neurons are the main element, and that our thoughts, memory, and creativity are a flow of electrical impulses. This knowledge has given hope of constructing an analogous brain in an electronic version, either hardware or software, where neurons are replaced by electronics or code. However, since we are not 100% sure exactly how the brain works, all current AI models are mathematical approximations and simplifications, serving only certain specific uses. Nevertheless, we know from observation that it is possible, for example, to create solutions that mimic the mind quite well - they can recognize handwriting, images (objects), music, emotions, and even create art based on previously acquired experiences. However, the results of the latter are sometimes controversial.
This burgeoning field of artificial intelligence has given rise to extensive philosophical discussions about AI and its implications on society, ethics, and the nature of intelligence itself. Philosophers and technologists alike grapple with questions surrounding the consciousness of AI systems, the ethical ramifications of creating entities capable of making decisions, and the potential for AI to surpass human intelligence. These discussions are not merely academic; they touch on the very essence of what it means to be human, challenging our understanding of creativity, morality, and the value of human experiences in an increasingly automated world.
What else does AI resemble the human brain in?
Well... it has to learn! AI solutions are based on one fundamental difference from classical algorithms: the initial product is a philosophical "tabula rasa", or "pure mind", which must first be taught.
In the case of complex living organisms, knowledge emerges with development: the ability to speak, to move independently, to name objects, and, in the case of humans and some animal species, there are elements of learning organized in kindergartens, schools, universities, and during work and independent development. It is analogous in most artificial intelligence solutions: the AI model must first receive specific knowledge, most often in the form of examples, to be able to later function effectively as an "adult" algorithm. Some solutions learn once, while others improve their knowledge while functioning (Online Learning or Reinforcement Learning). It vividly resembles the human community: some people finish their education and work for the rest of their lives in one company doing one task. Others have to train throughout their lives as their work environment changes dynamically.
The business landscape is undergoing a major transformation thanks to artificial intelligence, which is being used to automate tasks and identify new growth opportunities. Recently, a group of six entrepreneurs discussing how AI is revolutionizing their business operations showcased examples of this: one uses AI-powered chatbots to improve customer service, another uses machine learning algorithms to optimize supply chain management. These examples show the versatility of AI and its potential to enhance a wide range of business functions. As AI continues to evolve and become more accessible, it will be fascinating to see how businesses of all sizes and industries integrate it into their operations, not just to streamline existing processes but also to create new opportunities for innovation and growth.
Is AI already "smarter" than humans?
As an interesting aside, we can compare the "computing power" of the brain with the computing power of computers. This, of course, will be a simplification, because the nature of the two is quite different.
First, how many neurons does the average human brain have? It was initially estimated to be around 100 billion neurons. However, according to recent research (https://www.verywellmind.com/how-many-neurons-are-in-the-brain-2794889), the number of neurons in the "average" human brain is "slightly" lower, by about 14 billion, giving 86 billion neuronal cells. For comparison, the brain of a fruit fly has about 100 thousand neurons, a mouse 75 million, a cat 250 million, a chimpanzee 7 billion. An interesting case is the elephant's brain (much larger than a human's in terms of size), which has... 257 billion neurons, definitely more than the human brain.
From medical research, we know that for each neuron, there are about 1000 connections with neighboring neurons or so-called synapses, so in the case of humans, the total number of connections is around 86 trillion (86 billion neurons * 1000 connections). Therefore, in simplified terms, we can assume that each synapse performs one "operation", analogous to one instruction in the processor.
At what speed does the brain work? All in all... not very fast. We can determine it based on BCI (Brain-Computer Interface) devices, which appeared not so long ago as a result of the development of medical devices for electroencephalography (EEG), such as the headsets produced by Emotiv, thanks to which we can control a computer using brain waves. Of course, they do not integrate directly with the cerebral cortex but measure activity by analyzing electrical signals. Based on this, we can say that the brain works at a variable speed (analogous to the Turbo mode in a processor), between 0.5 Hz for the so-called delta state (complete rest) and about 100 Hz for the gamma state (stress, full tension).
Thus, we can estimate the maximum computational power of the brain as roughly 8.6 quadrillion operations per second (8.6*10^15), or 8.6 petaflops! Despite the relatively slow pace of the brain, this is a colossal number thanks to the parallelization of operations. From Wikipedia (https://en.wikipedia.org/wiki/Supercomputer), we learn that supercomputers did not break this limit until the first decade of the 21st century. The situation will change with the advent of quantum computers, which inherently work in parallel, just like the human brain. However, as of today, quantum computing technology is still in its infancy.
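Putting the article's own numbers together, the back-of-the-envelope calculation looks like this:

$$ 86 \times 10^{9}\ \text{neurons} \times 1000\ \tfrac{\text{synapses}}{\text{neuron}} \times 100\ \text{Hz} \approx 8.6 \times 10^{15}\ \tfrac{\text{operations}}{\text{s}} \approx 8.6\ \text{petaflops} $$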
In conclusion, at the moment, AI has not yet overtaken the human brain, but it probably will someday. However, we are only talking about learning speed here, leaving aside the whole issue of creativity, "coming up with" ideas, emotions, etc.
AI and mobile devices
Artificial intelligence applications require substantial computational power, especially at the so-called learning stage, and pose a significant challenge when integrating them with AR and VR solutions. Unfortunately, AR and VR devices mostly have very limited resources, as they are effectively ARM processor-based mobile platforms comparable in performance to smartphones. As a result, most artificial intelligence models are so computationally (mathematically) complex that they cannot be trained directly on mobile devices. OK - you can, but it will take an incredibly, unacceptably long time. So in most cases, to train models, we use powerful PCs (clusters) and GPU accelerators, mainly Nvidia CUDA. This knowledge is then "exported" into a simplified model "implanted" into AR and VR software or mobile hardware.
In our next blog post, you'll learn how we integrated AI into VR and AR, how we dealt with the limited performance of mobile devices, and what we use AI for in AR and VR.
1 September 2021
How to distinguish augmented reality from virtual reality?
Augmented Reality (AR) and Virtual Reality (VR) technologies are something everyone has probably heard of recently. Their popularity – undoubtedly thanks to their spectacular features, while remaining relatively accessible – is certainly not decreasing. These technologies are still being developed and improved, and new uses are being identified: some for entertainment, others for science, and still others for business. Although these are two different technologies, they are often mentioned interchangeably, and for people who do not work with them every day, the difference may not be so obvious. Therefore, we have decided to answer the question: what is the difference between Augmented Reality and Virtual Reality?
In the simplest terms, AR is a technological tool that superimposes virtual elements onto a real image while VR is a reality that is entirely created virtually. Both AR and VR allow you to interact with a created world in real time.
How does AR work?
To create AR, you need a device - a phone, tablet, or glasses - with a camera and an appropriate application.
Thanks to this application, the camera recognises a pre-defined object or a special marker and superimposes on it the graphics assigned to it. Such additional graphics - literally expanding reality - can show, for example, the interior of the object at which we point the camera, additional elements that can be attached to it, or something completely abstract, such as a creature in a children's game.
What about VR?
VR, on the other hand, completely separates us from the real world. To use it, we need special goggles (e.g., Oculus Quest 2), which do not superimpose an image on a real background but show us a 100% computer-generated, different reality. Here we have no reference to the surroundings; we find ourselves in a completely different world, unrelated to the real place in which we are physically present. Hence the term "immersion" in virtual reality: we enter it in its entirety, we can look around it and interact with it, for example by touching objects.
Now that we know the difference between these two technologies, the question is how we can use them.
There are endless possibilities.
How can we apply these tools in everyday practice?
Augmented reality is a great marketing tool, it is also perfect for expositions and as a service support function. It allows an extension of the information contained, for example, in advertising brochures by adding static or moving graphics, presentations, charts or simply as an interesting, eye-catching gadget, such as a game. Thanks to the possibility of superimposing an image on an existing object, it will enable, for example, users to illustrate the operation of a historical machine or show the interior of an interesting exhibit. During service works, augmented reality may facilitate work even for an inexperienced employee by providing step-by-step instructions or even a simple, yet clear presentation of all necessary actions.
itSilesia – AR in Practice
At itSilesia, we have had the opportunity to use AR in many projects. In the Ethnographic Museum in Chorzów you can see how an old-fashioned threshing machine works and… what a pigsty looked like in the 19th century! AR elements also accompany us throughout the tour in the form of a field game.
In the application created for Davis Polska (a fabrics manufacturer), you can check how the selected fabrics look on the finished piece of furniture, and the predictive maintenance application for FAMUR S.A. allows you to track the current parameters of the shearer's components. AR is also a good solution for an educational application for children, which is how it was used in the "Atlas" for the House of Polish-German Cooperation, presenting the history of Silesia.
itSilesia – VR in Practice
Virtual reality can be applied wherever we have a limited likelihood of contact with a real object: because it is inaccessible (e.g., outer space, a non-existent building, but also very expensive, specialised equipment), dangerous (e.g., the inside of a volcano, but also places with limited access in production plants), or, on the contrary, very delicate or available in limited quantities – here we mean, for example, not only rare fauna or flora specimens and museum objects, but also the human body. Thanks to the option of generating these objects virtually, we gain the chance to see and interact with them – to simulate a medical procedure, the servicing of a machine, or, for example, a rescue operation in difficult conditions.
Examples of this are the applications created for the Coal Mining Museum in Zabrze, which present several types of mining disasters and their causes. As a viewer, we find ourselves inside a mine gallery and can observe, in turn, flooding, a methane explosion, a rock burst and a fire.
The most developed VR project we have completed to date is a platform for training in VR, which is used among others by mining machinery manufacturer FAMUR S.A. This platform makes it possible to fully train employees in a given field - you can read more about it HERE.
Future Reality
As you can see, there are already many uses of both technologies, as described above, and everything indicates that new ones will definitely appear. Their possibilities are practically unlimited – restricted only by our imagination and... the deadlines of our graphic designers and programmers :)
Ready for something more advanced? In our next post, you'll find out how we're integrating VR/AR technologies and artificial intelligence.
4 August 2021
Summary of 2020 at itSilesia
Completion of the prototype of our product BELL VR
Thanks to the support received in the Design for Entrepreneurs competition, we successfully realised the tool prototype for conducting training sessions and presentations in VR. The platform enables presentation of the device to users, demonstrating the way it works and all the service activities available.
7 January 2021
Virtual Reality (VR) in Business – More Than Just a Cool Gadget
Virtual Reality (VR) has been taking trade shows, promotional booths, museums and industry events by storm for years. Whenever someone in a conference room suggests “let’s look cutting-edge,” chances are one of the first ideas will be “let’s do something in VR.” And with good reason – VR is a fast-evolving technology with stunning visuals that still isn’t ubiquitous. But does simply putting on a headset deliver lasting value and strengthen your brand?
Why Go Beyond a “One-Off Attraction”?
A brief VR demo creates a memorable “wow” moment, but entertainment alone won’t drive long-term impact. The real question is: how can you turn that initial excitement into a tool that boosts sales, powers VR training, and enhances product presentations?
3 VR Scenarios That Deliver Real ROI
- VR Product Presentation Showcase your products in their natural environment – whether that’s a factory floor, construction site, or showroom. Interactive 3D models and animations help prospects understand features and benefits before they even see the real thing.
- VR Training Programs Move workshops and hands-on instruction into virtual space. Employees can practice on complex machinery or production lines without risk of damage or the costs of on-site setup.
- Remote Demonstration & Service Equip your service teams and customers with VR/AR tools for remote diagnostics and support. Virtual overlays guide repairs, reduce travel time, and speed up resolution.
Why Now Is the Perfect Time to Invest in VR
Today’s standalone headsets are lightweight, wireless, and highly portable, making them easy to deploy anywhere – from offices to manufacturing halls. As remote and hybrid work models prevail, VR bridges the gap between online and physical, improving safety and efficiency for trainings and presentations.
How to Implement VR in Your Organization – Practical Tips
- Define Your Goals & Audience Decide whether your priority is brand positioning, lead generation, or optimizing workforce training.
- Select the Right Platform & Hardware Choose between tethered PCs or standalone headsets, considering scalability, user experience, and total cost of ownership.
- Design Engaging Scenarios Partner with VR specialists to build intuitive applications tailored to your industry and workflows.
- Measure Impact & Iterate Track metrics such as session duration, learning outcomes, and sales conversions to continuously refine your VR solution.
Case Study: VR Training & AR-Enhanced Service
See how we combined immersive VR training with augmented reality service support for a leading industrial client. Explore the full project details: Case Study – VR Training & AR Service
Conclusion
Virtual Reality is no longer just an attention-grabber at trade fairs. It’s a comprehensive solution for product presentations, VR training, and process optimization. By crafting strategic scenarios and measuring ROI, you can deploy a scalable, engaging VR platform that drives results now and into the future. Ready to unlock the full potential of VR in your business? Contact us today and take the next step in immersive innovation!
2 July 2020
How to debug your Unity application?
Sometimes things happen, or fail to happen, differently than we expect. That often requires a thorough investigation of the causality and flow of our code. The first move would normally be to throw a Debug.Log somewhere we expect the problem to occur. However, what if that is not enough?
Another parameter in Debug.Log
The problem: You have a number of objects with the same name that would normally be hard to distinguish.
The solution: pass the object in question as the second, optional argument of Debug.Log. From the Unity documentation:
public static void Log(object message, Object context);
If you pass a GameObject or Component as the optional context argument, Unity momentarily highlights that object in the Hierarchy window when you click the log message in the Console.
Now what does this mean? Let's say our debug code looks like this:
[csharp]
public class FindMe : MonoBehaviour
{
    void Start()
    {
        // The second argument is the context object that will be highlighted in the Hierarchy.
        Debug.Log("ヘ(・ω| Where am I?", this);
    }
}
[/csharp]
Then, by simply clicking on the log, the corresponding object will be highlighted. In this case a Component was passed (this refers to the class we're in, which ultimately inherits from Component). Similarly, any GameObject could be passed.

You may wonder whether any UnityEngine.Object could be used. This is exactly the case, and we can use it to locate anything that has an instance ID. That includes, but is not limited to, classes like Material, ScriptableObject, and Mesh.
Bonus fact: We can use EditorGUIUtility.PingObject to trigger the highlight functionality without writing logs. Link
Making the logs pop
The problem: You need to log a lot of things and, while viewing the data, categorize it quickly at a glance. Using the search box and collapsing the logs does not work well, as the order of messages is important and the value being searched for is not that obvious. The solution: Spice up your logs visually. Log statements can take Rich Text tags, and differentiating by colour is far faster for the eyes than reading each line. In the following example, if a slightly concerning value is shown, the colour of the value changes, which is noticed almost immediately.
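A minimal sketch of what such a colour-coded log could look like (the component, field name and threshold below are made up for illustration):
[csharp]
using UnityEngine;

public class HealthLogger : MonoBehaviour
{
    [SerializeField] float health = 100f;

    void Update()
    {
        // Colour the value red when it drops below an arbitrary warning threshold,
        // so it stands out in the Console at a glance.
        string colour = health < 25f ? "red" : "green";
        Debug.Log($"Health: <color={colour}>{health}</color>", this);
    }
}
[/csharp]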
Stopping the flow
The problem: The special case highlighted by colour is not understood well enough once it has passed. You need to inspect the scene and/or look at the exact execution in code. The solution: First, the better-known approach: setting a breakpoint. Unity and Visual Studio work quite well together for this purpose. For the basics of debugging on the Visual Studio side, see the official docs (link). Getting it working in Unity is quite simple:
- Click to the left of the line of code where you want the break to happen.
- Attach to Unity.
- Press Play.
- The breakpoint is hit. From there, Visual Studio allows you to inspect local values, investigate the call stack and use many other Visual Studio debugging tools.
Secondly, Debug.Break is your friend. It acts as if the pause button was pressed, except it happens exactly at the desired line in code. Then the state of the scene can be fully explored. Alternatively, use Debug.LogError while enabling the Error Pause option in the Console.
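A minimal sketch of pausing the editor automatically when a suspicious state appears (the Rigidbody and the speed threshold are assumptions made up for this example):
[csharp]
using UnityEngine;

public class VelocityWatcher : MonoBehaviour
{
    [SerializeField] Rigidbody body;
    [SerializeField] float maxExpectedSpeed = 50f;

    void FixedUpdate()
    {
        // Pause the editor the moment the value leaves the expected range,
        // so the scene can be inspected in exactly that frame.
        if (body.velocity.magnitude > maxExpectedSpeed)
        {
            Debug.Log($"Unexpected speed: {body.velocity.magnitude}", this);
            Debug.Break();
        }
    }
}
[/csharp]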
Adjusting the console window
The problem: There are myriad types of data that may need to be displayed, and not every type fits neatly in the limited space provided by the log entry. The solution: Look into the options of the Console window and select as necessary. Additionally, timestamps can be enabled or disabled.
What did I just build?
The problem: "Build completed with a result of 'Succeeded'" is not enough information about what has just happened, especially when looking to slim down the size of the app. The solution: Open the Console window options and choose "Open Editor Log". A text file opens. Starting from the Build Report section, there is a detailed breakdown of which assets went in and how much space they take up, conveniently in descending order.
Logging based on preprocessor directives
The problem: There is a system so vital to the whole app that every time it is interacted with, you have a set of key debug statements you want to see. However, there are too many to simply comment and uncomment every time. The solution: Create a static class, active under a custom directive, that calls the corresponding log method with a custom prefix. To quickly turn it off, have another class with exactly the same name and methods that is active under the negation of that directive. Then, in Edit > Project Settings > Player > Other Settings > Scripting Define Symbols, add the symbol that is used to filter this particular logger. The following example assumes a naming convention for selective logging in a networking module:
[csharp]
using UnityEngine;

#if NET_LOGGER && (UNITY_EDITOR || DEVELOPMENT_BUILD)
public static class DebugNet
{
    public static void Log(string message)
    {
        Debug.Log($"[NET] {message}");
    }

    public static void LogWarning(string message)
    {
        Debug.LogWarning($"[NET] {message}");
    }

    public static void LogError(string message)
    {
        Debug.LogError($"[NET] {message}");
    }
}
#else
public static class DebugNet
{
    public static void Log(string message) { }
    public static void LogWarning(string message) { }
    public static void LogError(string message) { }
}
#endif
[/csharp]
To ensure that the logs never end up in the final build, display will also be dependent on the build being a development variant or in the editor. To learn more about directives that Unity provides, have a look in the documentation.
A potential drawback of this solution is that it does leave debug statements in the main code flow (even if you can hide them in a way that minimizes impact on performance). On the other hand, it is like leaving a comment in code of what is expected to have happened at that particular point in time.
Numbers not enough?
The problem: There is a lot of data, and this data does not make much sense when viewed as numbers. Maybe it denotes a direction, or an area, or it is a number/string, but it's very important to know at a glance its current value and which object it's attached to. The solution: Depending on the needs, there are many approaches to debugging by drawing things on the screen.
Debug class
The quickest way to draw a line is via the Debug class. From the documentation (line, ray):
public static void DrawLine(Vector3 start, Vector3 end, Color color = Color.white, float duration = 0.0f, bool depthTest = true);
public static void DrawRay(Vector3 start, Vector3 dir, Color color = Color.white, float duration = 0.0f, bool depthTest = true);
Those two methods are especially useful when debugging things like raycasts. A simple example is to see how your object sees the world in front of it. This can highlight many possible issues that arise from improper layer setups or a wrongly defined raycast source and direction.
[csharp]
[SerializeField] float hitDist = 1000;

void Update()
{
    Ray ray = new Ray(transform.position, transform.forward);
    RaycastHit hit;
    if (Physics.Raycast(ray, out hit, hitDist))
    {
        // Green line from the origin to the point that was actually hit.
        Debug.DrawLine(ray.origin, hit.point, Color.green);
        // Do your thing on hit
    }
    else
    {
        // Red line covering the full distance that was tested.
        Debug.DrawLine(ray.origin, ray.GetPoint(hitDist), Color.red);
    }
}
[/csharp]
This snippet shows how to visualize your raycast. Note that if nothing is hit, DrawLine is used as well. This is so that we don't draw a line of infinite length, but only the actual distance being tested by the raycast. The film shows the behaviour of the code above:

Gizmos and Handles
If you need more custom shapes to be displayed than simple lines, then the Gizmos context is your friend. It can help you draw all sorts of basic shapes, as well as custom meshes. In the following example, bounds are visualized in a similar way to how a box collider might do it.
[csharp]
public class BoundingBox : MonoBehaviour
{
    [SerializeField] Bounds bounds;
    [SerializeField] bool filled;

    void OnDrawGizmosSelected()
    {
        // Transform each drawn point from local space to world space,
        // so the gizmo follows the object's transform.
        Gizmos.matrix = transform.localToWorldMatrix;
        if (filled)
        {
            Gizmos.DrawCube(bounds.center, bounds.size);
        }
        Gizmos.DrawWireCube(bounds.center, bounds.size);
    }
}
[/csharp]
To ensure the coordinates of the drawn bounds respond to the transform component, a matrix that transforms each drawn point from local space to world space is set.
There is also the Handles class, generally used to build Editor tools, as most of its methods return a value when the user modifies something. For debugging, however, it has one major advantage that Gizmos doesn't have: a handy way to add labels to your objects (documentation).
public static void Label(Vector3 position, string text, GUIStyle style);
[csharp]
void OnDrawGizmos()
{
    Handles.Label(transform.position, $"{gameObject.name} {transform.position.ToString()}", EditorStyles.miniButton);
}
[/csharp]
This snippet draws a temporary label over the object. It is drawn at the position of the object and displays the object's name and position. It can be used to display any kind of data where you need to know the current state of some variable, and which object it corresponds to, at a single glance. As the label is white and thin by default, a style was applied to rectify that; for a quick setup that stays visible against any background, the EditorStyles class with a button-style display was used.

Keep in mind that Gizmos can only be drawn from OnDrawGizmos, OnDrawGizmosSelected, or methods with the proper attribute applied. If you try it in any other place, it will not work. This means that Gizmos are specific to Components. If you need to draw things from an Editor window, then Handles are the only option.
In conclusion...
As a final note, sometimes third-party tools or scripts are needed to aid with debug code. One should always be mindful when adding large plugins, as they may have many functionalities that are simply not needed for the project and might even hinder it. That being said, hopefully you have now learned some techniques to consider when tackling a new bug or aiming to gain a greater understanding of your system. Keep in mind, though, that a solution should always be tailored to the problem, and using a shiny new technique is usually not the way to go. Always take a step back to consider what is actually needed.
2 March 2020
How NOT to Write Code in React JS
React, Vue, Angular, Ember, Backbone, Polymer, Aurelia, Meteor, Mithril.js, Preact... There are many super fancy JavaScript frameworks nowadays. We can write anything we want; it's comfortable and easy to understand, although difficult to master. After a few lines of code, or even after writing a few small applications, you may think that no matter what you write, these frameworks will do the job.
Yeah, what you see above is the iconic Star Wars series and Chewbacca being very skeptical about Han's idea. In the programming universe, you should have this Chewbacca in your mind, and this Chewbacca should always be skeptical about your code. It's very important to write your code carefully and thoughtfully no matter what framework you are using. I know it looks like these frameworks do everything for you and you don't have to worry about anything, but it's not entirely true. Buckle up, in this article we are going to go through the most common mistakes made in React (and probably in other similar frameworks/libraries). I am probably one of the most reliable people to talk about it because I used to make some of these mistakes for a loooong time. And I'm probably still making some of them.
New feature? Yeah.. One component is enough.
Nope. Nine times out of ten it won't be enough. Imagine that, in your application, you have a list of board games with some simple filtering controls over the table. You know, choosing a price, an age or a type. At the beginning it looks like it's enough to create a BoardGamesList component and put all of the logic inside. You may think that it doesn't make sense to create separate BoardGameRow and BoardGamesFilters components. Or even PriceFilter, AgeFilter and BoardGameTypeFilter. "It's just a few lines! I don't have time to create all these components with so few lines of code." It's just a few lines for now. But it's very likely that during the next year your client will require a few more filter options, some fancy icons, 5 ways of ordering the games and 10 ways to display the game row depending on something. Trust me, in my programming life I have experienced too many components which were small at the beginning and after a year turned into a massive, uncontrollable piece of sh.. component. Seriously, if you take a few moments and divide it functionally at the beginning, it'll be much easier to work with this component in the future. For you. For your colleagues. And even if it stays small, it'll be easier to find what you need if you rationally divided it into separate React components. Then your work will look like this:
- Hey, we have some weird bug when filtering board games by age. Can you do something about it?
- Yeah, I know exactly where to find it. It's in AgeFilter.js and it's so small that I'll need half an hour to fix it!
- Great, I think you should get a pay rise!

setState? Pff.. I don't have to read the documentation.
If you decided to use React state in your components, you should know that its main function, setState, is asynchronous. This means you can't just put code that depends on your state right after the setState call. Let's look at this case:
[js]
this.setState({isItWorking: true});
console.log(this.state.isItWorking); // returns false, WHY?!
[/js]
setState is asynchronous, which means it needs a moment to properly set the data you passed. How to handle this correctly? The most common way is to pass a callback function as the second parameter, which will be executed once the data has been written to the state.
[js]
this.setState({isItWorking: true}, () => {
  console.log(this.state.isItWorking); // and now it returns true
});
[/js]
Sometimes you have to do consecutive operations on your state. Because of setState's asynchrony, you may receive unexpected results.
[js]
// this.state.name = 'STAR WARS: '
this.setState({name: this.state.name + 'THE RISE OF '});
this.setState({name: this.state.name + 'SKYWALKER'});
[/js]
Unfortunately, you won't end up with the real name of episode IX - STAR WARS: THE RISE OF SKYWALKER. You will probably get a partially filled title like STAR WARS: SKYWALKER. It would be a nice title, but it's not what we wanted, because the second setState still read the old state and overwrote the first update. To fix it you can use the callback technique again, but there is another way to handle this case. Instead of passing a new object, you can pass a function which returns an object. What's the difference? This function's first parameter is the current "version" of the state, so you will always work on the updated state.
[js]
// this.state.name = 'STAR WARS: '
this.setState(state => ({name: state.name + 'THE RISE OF '}));
this.setState(state => ({name: state.name + 'SKYWALKER'}));
[/js]
If that's not enough for you and you want to know how setState works internally, it'll be a smart choice to read an article from Redux co-author Dan Abramov: How Does setState Know What to Do?
Hey! Why is this.props undefined?
This mistake is still very common, even though arrow functions are one of the main features of the ES6 specification.
[js]
handleFieldChange() {
  console.log(this.props); // returns undefined
}
[/js]
Why? I am inside the component, I should have access to my props! Unfortunately not. This function has its own this (different from the component's this), and if you want to use a standard function you should consider binding this with .bind(this), or the not-so-beautiful const self = this before the function. A much easier and simply better option is to use ES6 arrow functions.
[js]
handleFieldChange = () => {
  console.log(this.props); // YEAH! It returns my props!
}
[/js]
An arrow function uses something called lexical scoping. Put simply, it uses this from the code containing the arrow function - in our case, the component. That's it. No more bindings, no more awful selves, no more unexpectedly missing variables. Arrow functions are also very useful if you need to propagate a few functions. For example, you need a setup function which takes some important parameters and then returns the generic handler function.
[js]
handleFieldChange = fieldName => value => {
  // [fieldName] is a computed (dynamic) key name - it takes the name passed in the fieldName variable
  this.setState({[fieldName]: value});
}
[/js]
This is a very generic way to create a function that receives a field name and then returns the generic handler function for, let's say, an input element. And if you execute it like that...
[js]
<Input onChange={this.handleFieldChange('description')} />;
[/js]
...your input will have this classic function assigned to the onChange event:
[js]
handleFieldChange = value => {
  this.setState({description: value});
}
[/js]
You should also know that you can fully omit the curly braces if you have something very short to return.
[js]
getParsedValue = value => parseInt(value, 10);
[/js]
In my opinion, in most cases you should avoid it because it can be difficult to read. On the other hand, in simple cases like the one above, it'll save you a few lines. But you should be careful doing it. Let's say I have a single object to return. I decide to return it in one line because it's a really short line.
[js]
getSomeObject = value => {id: value};
[/js]
Oh yeah, you may think that, based on the previous code example, it should definitely work. But it doesn't, and it's quite easy to explain. In this case, the parser thinks that you are using a standard arrow function body and these curly braces are just the beginning and the end of the function. If you really want to return an object in one line, you should use this syntax:
[js]
getSomeObject = value => ({id: value});
[/js]
In this case, the returned object is wrapped in parentheses and it works as intended. Personally, I don't like using one-line functions, but it's a very nice way to pass short code to functions like map or filter. Clean, easy to read and contained in one line.
[js]
someCrazyArray.map(element => element.value).filter(value => value.active);
[/js]
Okaaaaaaay, quite a lot of info about simple arrow functions, but I think it's valuable if you didn't know about it. Let's move on to the next one of the ReactJS mistakes!
The browser has crashed.. Why?
I can't even count how many times I or my buddies struggled with some weird browser crash or the inability to perform any action in the application. In many cases, the reason is an infinite loop. I suppose there are billions of ways to get an infinite loop, but in React the most common source is componentWillReceiveProps or another React lifecycle method. ComponentWillReceiveProps is currently deprecated, but I will focus on it because there are plenty of React applications still using it, and for me most of these bugs happened in this very lifecycle method. I have multiple examples in mind which could help visualize the problem, but for the purposes of this article I will present a use-case based on a board games example:
Every time a user changes the age, the application should load board games for the chosen age.
[js] componentWillReceiveProps(nextProps) { if (nextProps.age) { this.loadBoardGames(nextProps.age); } } [/js] "Right, if the age is passed, I load board games." If you don't know how this lifecycle works, you may end up with a solution similar to the one above. But this lifecycle method doesn't work exactly like that. First of all, componentWillReceiveProps is executed every time any of the component's props change. It's quite easy to understand. But you may still think: "Okay, so every time the age is changed, it'll load board games. Isn't that okay?". Partially yes, but in most cases there are other props in your component - props which will also trigger this lifecycle function. Imagine that we also have a boardGames prop (where we store the currently displayed board games). Let's examine such a situation (a guarded fix is sketched after the list):
- Age prop is changed
- componentWillReceiveProps is executed (which triggers a board games load)
- Board games prop is changed
- componentWillReceiveProps is executed (which triggers a board games load)
- Board games prop is changed
- componentWillReceiveProps is executed (which triggers a board games load)
- INFINITE LOOP!!!
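The fix is to react only to an actual change of the prop we care about, instead of reacting to every props update. A minimal sketch of such a guard (assuming the same loadBoardGames method as above) looks like this:
[js]
componentWillReceiveProps(nextProps) {
  // load board games only when the age prop actually changed
  if (nextProps.age && nextProps.age !== this.props.age) {
    this.loadBoardGames(nextProps.age);
  }
}
[/js]
With this comparison in place, changes to the boardGames prop no longer trigger another load, and the loop is broken.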

Something else?
No, that's it. There are many different ways to crash or mess up your React application, but I chose the ones I have experienced myself. I hope it was a valuable read for you. Now.. let's code!
10 December 2019
0
Neo4j with Spring Boot
In this article, I will show you the advantages and disadvantages of the Neo4j graph database, a technology used by big companies like Google, Facebook or PayPal. I will also show you how to create and populate it with the help of Spring Boot (a small sketch follows the list below). Why a graph database…? The main point of graph databases is to store relationship information as a first-class entity, because, despite the name, relational databases only describe relationships through the standard one-to-one, one-to-many and many-to-many mappings. A huge advantage of graph databases is that performance does not degrade as the amount of data grows. ...and why Neo4j? Apart from the above points, Neo4j itself has a number of advantages, such as:
- scalability
- good documentation
- easy to use
- built-in Spring Boot support
- wide user community
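To give a first taste of the Spring Boot integration, here is a minimal sketch of a node entity and its repository using Spring Data Neo4j (the Movie class and its fields are just an illustrative assumption, not the article's actual model):
[java]
import org.neo4j.ogm.annotation.GeneratedValue;
import org.neo4j.ogm.annotation.Id;
import org.neo4j.ogm.annotation.NodeEntity;
import org.springframework.data.neo4j.repository.Neo4jRepository;

// a node stored in the graph
@NodeEntity
public class Movie {

    @Id
    @GeneratedValue
    private Long id;

    private String title;

    public Movie() {}

    public Movie(String title) {
        this.title = title;
    }
}

// Spring Data generates the implementation at runtime
interface MovieRepository extends Neo4jRepository<Movie, Long> {
}
[/java]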


10 June 2019
1
6 Tips That Every MySQL User Should Know
Over the last 3 years, I have been working with MySQL almost every day. Even though non-relational databases like MongoDB are gaining more and more popularity every year, traditional SQL solutions are still widely used for many purposes. In this post, I will share some of the tricks I have been using to make my life easier. Note: most of these tips apply only to development machines; for production you should take more care.
1. Run MySQL in Docker
Seriously, in 2018 there is absolutely no need to run a MySQL server natively by installing it, setting up users and passwords, performing upgrades and so on. You are wasting your client's time if you are still doing it. Just use this sample Docker Compose file as a working starting point:
[code]
version: '3'
services:
  mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=myproject
    ports:
      - 127.0.0.1:3306:3306
    volumes:
      - ./varlibmysql:/var/lib/mysql
[/code]
After docker-compose up, you will have a working server bound to localhost only on the standard port 3306, one user root/root and a pre-created "myproject" database. Throw in restart: always into the Compose file if you want to keep the server running across reboots. That is enough for 95% of software projects; this solution is completely disposable and easy to recreate. Note: I still have the MySQL client installed natively on my development machine. Technically speaking, there is a way of avoiding this too and running the client itself from a Docker image, but that is a matter of preference.
2. Store MySQL data in RAM
In the Docker world, the best practice for handling data storage is to keep it outside of the container filesystem - in the case of a development machine, in some mounted directory on the host. A cool trick, at least on Linux, is to create a RAM-based filesystem (tmpfs) and use it as the data storage for MySQL. This will, of course, result in data loss after a machine restart, but who cares about development data? If it is important, you should have a recent and verified backup anyway, right? In my case, I am mounting the folder /tmp/varlibmysql into the container's /var/lib/mysql, since I am using a ramdisk for the whole temporary directory to limit SSD wear (a sample mount is sketched after the timings below). So the relevant part of the Compose file is:
[code]
...
    volumes:
      - /tmp/myproject/varlibmysql:/var/lib/mysql
...
[/code]
There is a noticeable performance gain with this configuration: I measured the time it takes to run a few hundred Liquibase migrations on application startup and the time it takes to import a ~1GB database dump.
- migrations: SSD 0:44, ramdisk 0:07 - huge speedup
- import: SSD 5:23, ramdisk 4:14 - small but noticeable speedup
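If your temporary directory is not already RAM-backed, a tmpfs mount can be created like this (a sketch; the 2G size and the /tmp/myproject mount point are arbitrary choices):
[code]
# create a 2 GiB RAM-backed filesystem and mount it under /tmp/myproject
sudo mkdir -p /tmp/myproject
sudo mount -t tmpfs -o size=2G tmpfs /tmp/myproject

# verify the mount
df -h /tmp/myproject
[/code]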
3. Manage database dumps like a boss
I personally dislike edge case bugs, which often appear in web applications. One of the ways for a tester to describe such a rarely occurring bug is to provide a list of a bazillion intricate steps that have to be carefully performed in order for the bug to appear. It is much easier to create a bug report with a database dump attached, which contains the whole state of the application. As a result, explaining and, more importantly, reproducing a bug is much easier and faster. However, if every time this happens someone needs to go to StackOverflow to recall the mysqldump syntax, no one will want to do this. So let's fix the issue once and for all:
[code]
$ cat export.sh
#! /bin/bash
set -e
DATABASE=myproject
if [ "$#" -ne 1 ]; then
    echo "Usage: export.sh <filename.sql.bz2>"
    exit 1
fi
echo "Exporting to $1..."
mysqldump --protocol tcp -h localhost -u root -proot ${DATABASE} \
    | pv \
    | bzip2 > "$1"
echo "Export finished."
[/code]
[code]
$ cat import.sh
#! /bin/bash
set -e
DATABASE=myproject
if [ "$#" -ne 1 ]; then
    echo "Usage: import.sh <filename.sql.bz2>"
    exit 1
fi
echo "Importing from $1..."
bzcat "$1" \
    | pv \
    | mysql --protocol tcp -h localhost -u root -proot ${DATABASE}
echo "Importing finished."
[/code]
[code]
$ cat drop.sh
#! /bin/bash
set -e
DATABASE=myproject
echo "Dropping and recreating ${DATABASE} ..."
mysql --protocol tcp -h localhost -u root -proot ${DATABASE} \
    -e "drop database ${DATABASE}; create database ${DATABASE};"
echo "Done."
[/code]
These are the three scripts I use every day: SQL export, SQL import, and one extra for recreating a database for testing purposes. They use bzip2 compression for minimum file size and the pv tool for visualising the data flow.
4. Log executed queries
Recently I have been fixing some performance problems in one of our projects. Our business contact reported that "this specific webpage is slow when there are lots of users present". I started looking around in Chrome Developer Tools and it became clear that the issue is on the backend side, as usual... I could not see any obvious bottlenecks in Java code, so I went a layer down into the database and yep, there was a performance problem there - some innocent SQL query was executed thousands of times for no reason. In order to debug such cases, query logging is a must, otherwise we are shooting in the dark. You can enable basic query logging using those commands in MySQL console: [code] MySQL [myproject]> SET global general_log = 1; Query OK, 0 rows affected (0.00 sec) MySQL [myproject]> SET global log_output = 'table'; Query OK, 0 rows affected (0.00 sec) [/code] From now on, all queries will be logged in special table mysql.general_log. Fun fact - this table is actually a real physical table, it can be searched, exported etc. - good for documenting bugfixes. Let's create some sample database structure: [code] MySQL [myproject]> create table random_numbers(number float); Query OK, 0 rows affected (0.00 sec) MySQL [myproject]> insert into random_numbers values (rand()), (rand()), (rand()), (rand()); Query OK, 4 rows affected (0.00 sec) Records: 4 Duplicates: 0 Warnings: 0 [/code] And now run a few queries and see if they are captured in the log: [code] MySQL [myproject]> select * from random_numbers where number < 0.1; Empty set (0.00 sec) MySQL [myproject]> select * from random_numbers where number < 0.5; +----------+ | number | +----------+ | 0.254259 | +----------+ 1 row in set (0.00 sec) MySQL [myproject]> select * from random_numbers where number < 0.9; +----------+ | number | +----------+ | 0.777688 | | 0.254259 | +----------+ 2 rows in set (0.00 sec) MySQL [myproject]> select event_time, argument from mysql.general_log; +----------------------------+-------------------------------------------------+ | event_time | argument | +----------------------------+-------------------------------------------------+ | 2018-11-26 12:42:19.784295 | select * from random_numbers where number < 0.1 | | 2018-11-26 12:42:22.400308 | select * from random_numbers where number < 0.5 | | 2018-11-26 12:42:24.184330 | select * from random_numbers where number < 0.9 | | 2018-11-26 12:42:28.768540 | select * from mysql.general_log | +----------------------------+-------------------------------------------------+ 4 rows in set (0.00 sec) [/code] Perfect! Keep in mind that "all queries" means all of all, so if you are using graphical database tools for viewing query logs, those "query querying logs" will be also there.5. Use remote servers
The MySQL protocol runs over TCP/IP (note to nitpickers: yes, it can also work through UNIX sockets, but the principle is the same). It is perfectly fine to use some remote MySQL server/service for local development instead of a local server. This is useful for working with big databases that would not fit onto a tiny laptop SSD, or if we need more performance. The only concern is network latency, which can sometimes diminish any performance gains - but I think it is still a useful trick. Let's try Amazon RDS. It is pretty expensive for long-term usage but affordable for one-off tasks. The graphical setup wizard in the AWS web console is pretty easy to use, so I will not cover it here in full; however, pay attention to:
- DB instance class - pick the cheapest one to start with, you can always upgrade it later if needed
- Allocated storage - the minimum (20 GiB) is actually a huge amount of space for a typical project, but you can increase it now if you need
- Username and password - pick something secure, because our instance will be publicly visible from the Internet
- Security group - pick/create a security group with full access from the Internet



6. Learn some SQL!
Finally, even if you love ORMs and can't live without Spring Data and sexy dynamic finders (which can sometimes be long enough to wrap on a 4K screen), it is very beneficial to learn at least some SQL to understand how everything works underneath: how ORMs map one-to-many and many-to-many relationships with the use of extra join tables, how transactions work (very important in high-load systems) and so on. Also, some kinds of performance problems ("N+1 select" being the most common) are completely undiscoverable without knowing how the underlying SQL is generated and subsequently executed. And that is all. Thanks for reaching this point and I hope you've learnt something.
15 April 2019
0
Let’s shake some trees – how to enhance the performance of your application
Nowadays JavaScript applications are getting bigger and bigger. One of the most crucial things while developing is to optimise the page load time by reducing the size of the JavaScript bundle file.
JavaScript is an expensive resource to process, and it should be compressed before being sent over the network.
One of the most popular techniques to improve the performance of our applications is code splitting. It is based on splitting the application into chunks and serving only those parts of JavaScript code that are needed at the specified time. However, this article is going to be about another good practice called tree shaking.
Tree shaking relies on the ES2015 import and export syntax and enables dead-code elimination. Since Webpack 4 was released, it is possible to give the compiler a hint through the “sideEffects” property, pointing out the modules that can be safely pruned from the application tree if they are unused. A function is considered to have side effects if it modifies something outside its own scope.
Real life example
In order to introduce the tree shaking concept more precisely, let's start by creating a new project with an application entry point (index.js) and an output bundle file (main.js); a minimal Webpack configuration for it is sketched below.
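A minimal configuration matching this layout could look like the following sketch (assuming the sources live in src/ and the bundle is emitted to dist/):
[code]
// webpack.config.js
const path = require('path');

module.exports = {
  entry: './src/index.js',
  output: {
    filename: 'main.js',
    path: path.resolve(__dirname, 'dist')
  }
};
[/code]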
In the next step, a new JavaScript file (utils.js) is added to the src directory...
[code]
export function foo() {
console.log('First testing function')
}
export function bar() {
console.log('Second testing function')
}
[/code]
...and imported in the index.js.
[code]
import { foo } from './utils.js'
foo()
[/code]
Webpack 4 introduces the production and development modes. In order to get a non-minified output bundle, we should run the build process in development mode, which can be defined in the package.json file.
[code]
"scripts": {
"dev": "webpack --mode development",
"build": "webpack --mode production"
}
[/code]
Now, just open the terminal and run the npm run dev script.
Although only the foo function has been imported in the entry point, our output bundle still contains both the foo and bar functions. The bar function is “dead code”, since it is an unused export that should be dropped.
[code]
console.log('First testing function');\n\nfunction bar() {\n console.log('Second testing function')
[/code]
To fix this problem we are about to set the “sideEffects” property in package.json file.
[code]
{
"name": "tree-shaking",
"version": "1.0.0",
"sideEffects": "false",
}
[/code]
This property tells the compiler that none of the project's files have side effects, so every unused export can safely be pruned. It also accepts an array of relative or absolute paths to the files that should not be dropped because they do have side effects.
[code]
{
"name": "tree-shaking",
"version": "1.0.0",
"sideEffects": "./src/file-wth-side-effects.js",
}
[/code]
Minification
After marking unused ES6 imports and exports, we still need to actually remove the “dead code” from the application bundle. The only thing we have to do is set the mode configuration to production and execute npm run build.
[code]
([function(e,t,n){"use strict";n.r(t),console.log("First testing function")}]);
[/code]
As we can see, the second testing function is no longer included in the minified bundle file.
Use tree shaking with ES6
It is crucial to keep in mind that the tree shaking pattern works only with ES6 import and export modules. We cannot “shake the tree” when using CommonJS modules without the help of special plugins. To avoid this, babel-preset-env should be configured to leave ES6 modules alone.
[code]
{
"presets": [
["env", {
"modules": false
}]
]
}
[/code]
Exception
Removing unused modules does not work out of the box with lodash, one of the most popular utility libraries. If you import a function in the way depicted below, the whole lodash library will still be pulled into the bundle.
[code]
import { join } from 'lodash'
[/code]
To work around that, we need to install the lodash-es package and import the modules in the following way:
[code]
import join from 'lodash-es/join'
[/code]
Conclusion
Let’s prove the statement from the beginning of this article! Let’s take a closer look at the sizes of the bundle file (main.js) before and after the tree shaking and minification process.
As we can see, we reduced the output size significantly. When you first start using tree shaking in your projects, the gains may not seem big, but you will notice how much it helps once the application tree gets more complex.
14 January 2019
0
Budget-Friendly Kubernetes: How to Run Your Cluster Without Breaking the Bank
Over the last few years, Kubernetes has proved that it is the best container orchestration software on the market. Today, all 3 of the biggest cloud providers (Amazon, Google and Azure) offer a form of managed Kubernetes cluster, in the form of EKS, GKE and AKS respectively. All of those offerings are production-ready, they are fully integrated with other cloud services and they include commercial support. There is one problem, though, and it is the price. Typically, cloud offerings like this are targeted at big organizations with a lot of money. For example, Amazon Elastic Kubernetes Service costs 144$/mo just for running the cluster management ("control plane"), and all compute nodes and storage are billed separately. Google Kubernetes Engine does not charge anything for the control plane, but the instances to run nodes on aren't cheap either - a reasonable 4 GiB machine with only a single core costs 24$/mo. The question is, can we do it cheaper? Since Kubernetes itself is 100% open source, we can install it on our own server of choice. How much will we be able to save? Big traditional VPS providers (like OVH or Linode for instance) give out decent machines at prices starting from 5$/mo. Could this work? Well, let's try it out!
Setting up the server
Hardware
For running our single-master, single-node host we don't need anything too fancy. The official requirements for a cluster bootstrapped by the kubeadm tool, which we are going to use, are as follows:
- 2 GiB of RAM
- 2 CPU cores
- a reasonable disk size for basic host software, Kubernetes native binaries and its Docker images
Installing Docker and kubeadm
Here we will be basically following the official guide for kubeadm, which is a tool for bootstrapping a cluster on any supported Linux machine. First thing is Docker - we'll install it through the default Ubuntu repo, but generally, we should pay close attention to Docker version - only specific ones are supported. I've run into many problems creating a cluster with the current newest version (18.06-*), so I think this is worth mentioning. [code] $ apt-get update $ apt-get install -y docker.io [/code] And let's run a sample image to check if the installation was successful: [code] $ docker run --rm -ti hello-world Unable to find image 'hello-world:latest' locally latest: Pulling from library/hello-world 9db2ca6ccae0: Pull complete Digest: sha256:4b8ff392a12ed9ea17784bd3c9a8b1fa32... Status: Downloaded newer image for hello-world:latest Hello from Docker! This message shows that your installation appears to be working correctly. (...) $ [/code] Good. The next step is kubeadm itself - note that the last command (apt-mark hold) will prevent APT from automatically upgrading those packages if a new version appears. This is critical because cluster upgrades are not possible to be done automatically in a self-managed environment. [code] $ apt-get update && apt-get install -y apt-transport-https curl $ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg \ | apt-key add - $ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list deb http://apt.kubernetes.io/ kubernetes-xenial main EOF $ apt-get update $ apt-get install -y kubelet kubeadm kubectl $ apt-mark hold kubelet kubeadm kubectl [/code]Setting up a master node
The next step is to actually deploy Kubernetes onto our server. The two basic commands sketched after the list below initialize the master node and select Flannel as the inter-container network provider (an out-of-the-box zero-config plugin). People with Docker Swarm experience will immediately notice the similarity in the init command - here we specify two IP addresses:
- apiserver advertise address is the address at which, well, the apiserver will be listening - for single-node clusters we could put 127.0.0.1 there, but specifying the external IP of the server here will allow us to add more nodes to the cluster later, if necessary,
- pod network CIDR is a range of IP addresses for pods, this is dictated by the network plugin that we are going to use - for Flannel it must be that way, period.
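A typical kubeadm/Flannel bootstrap looks roughly like the sketch below (replace the advertise address with your server's external IP; 10.244.0.0/16 is the CIDR Flannel expects by default):
[code]
# initialize the control plane; <SERVER_EXTERNAL_IP> is a placeholder
kubeadm init \
    --apiserver-advertise-address=<SERVER_EXTERNAL_IP> \
    --pod-network-cidr=10.244.0.0/16

# make kubectl work for the current user
mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config

# install Flannel as the pod network add-on
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# single-node cluster: allow regular workloads to be scheduled on the master
kubectl taint nodes --all node-role.kubernetes.io/master-
[/code]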
Testing our setup
Testing - pod scheduler
Kubernetes is a really complex piece of container orchestration software, and especially in the context of a self-hosted cluster, we need to make sure that everything has been set up correctly. Let's perform one of the simplest tests, which is to run the same hello-world image, but this time, instead of running it directly through the Docker API, we'll tell the Kubernetes API to run it for us.
[code]
$ kubectl run --rm --restart=Never -ti --image=hello-world my-test-pod
(...)
Hello from Docker!
This message shows that your installation appears to be working correctly.
(...)
pod "my-test-pod" deleted
[/code]
That's a bit of a long command, so let's explain in detail what the parameters are and what just happened. When we are using the kubectl command, we are talking to the apiserver. This component is responsible for taking our requests (under the hood, HTTP REST requests, but we are using it locally) and communicating our needs to other components. Here we are asking it to run an image named hello-world inside a temporary pod named my-test-pod (just a single container), without any restart policy and with automatic removal after exit. After running this command, Kubernetes will find a suitable node for our workload (here we have just one, of course), pull the image, run its entrypoint command and serve us its console output. After the process finishes, the pod is deleted and the command exits.
Testing - networking
The next test is to check if networking is set up correctly. For this, we will run Apache web server instance - conveniently, it has a default index.html page, which we can use for our test. I'm not going to cover everything in YAML configurations (that would take forever) - please check out pretty decent official documentation. I'm just going to briefly go through concepts. Kubernetes configuration mostly consists of so-called "resources", and here we define two of them, one Deployment (which is responsible for keeping our Apache server always running) and one Service of type NodePort (which will allow us to connect to Apache under assigned random port on the host node). [code] $ cat apache-nodeport.yml --- apiVersion: apps/v1 kind: Deployment metadata: name: apache-nodeport-test spec: selector: matchLabels: app: apache-nodeport-test replicas: 1 template: metadata: labels: app: apache-nodeport-test spec: containers: - name: apache image: httpd ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: apache-nodeport-test spec: selector: app: apache-nodeport-test type: NodePort ports: - port: 80 $ kubectl apply -f apache-nodeport.yml [/code] It should be up and running in a while: [code] $ kubectl get pods NAME READY STATUS RESTARTS AGE apache-nodeport-test-79c84b9fbb-flc9p 1/1 Running 0 25s $ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) apache-nodeport-test NodePort 10.108.114.13 <none> 80:31456/TCP kubernetes ClusterIP 10.96.0.1 <none> 443/TCP [/code] Let's try curl-ing our server, using a port that was assigned to our service above: [code] $ curl http://139.59.211.151:31456 <html><body> <h1>It works!</h1> </body></html> [/code] And we are all set!How to survive in the Kubernetes world outside a managed cloud
Most DevOps people, when they think "Kubernetes" they are thinking about a managed offering of one of the biggest cloud providers, namely Amazon, Google Cloud Platform or Azure. And it makes sense - such provider-hosted environments are really easy to work with, they provide cloud-expected features like metrics based autoscaling (both on container count and node count level), load balancing or self-healing. Furthermore, they provide well-integrated solutions to (in my opinion) two biggest challenges of containerized environments - networking and storage. Let's tackle the networking problem first.Kubernetes networking outside the cloud
How do we access our application running on the server, from the outside Internet? Let's assume we want to run a Spring Boot web hello world. In a typical traditional deployment, we'll have our Java process running on the host, bound to port 8080 listening to traffic, and that's it. In case of containers and especially Kubernetes, things get complicated really quickly. Every running container is living inside pod, which has its own IP address from pod network CIDR range. This IP is completely private and invisible for clients trying to access the pod from the outside world.
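One simple bare-metal pattern (an assumption, not necessarily the final setup used later in this article) is to keep exposing services as NodePorts and put an ordinary reverse proxy on the host in front of them, e.g. with nginx, reusing the NodePort assigned in the earlier test:
[code]
# /etc/nginx/conf.d/kubernetes-for-poor.conf - forward plain HTTP traffic
# to the NodePort assigned to the service (31456 in the earlier test)
server {
    listen 80;
    server_name kubernetes-for-poor.test;

    location / {
        proxy_pass http://139.59.211.151:31456;
        proxy_set_header Host $host;
    }
}
[/code]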

Kubernetes storage outside the cloud
The second biggest challenge in the container world is storage. Containers are by definition ephemeral - when a process is running in a container, it can make changes to the filesystem inside it, but every container recreation will result in loss of this data. Docker solves it by the concept of volumes, which is basically mounting a directory from the host at some mount point in container filesystem, so changes are preserved.
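On a single bare-metal node, the PersistentVolumeClaims used in the next section need something to bind to; a minimal sketch using a hostPath-backed PersistentVolume (the volume name and host directory are arbitrary assumptions) could be:
[code]
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
spec:
  capacity:
    storage: 20Gi            # big enough for the 20Gi claim used below
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /srv/kubernetes-volumes/pv-1   # assumed directory on the host
[/code]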
Let's actually deploy something
We have our single-node cluster up and running, ready to run any Docker image, with external HTTP connectivity and automatic storage handling. As an example, let's deploy something everyone is familiar with, namely Wordpress. We'll use official Google tutorial with just one small change: we want our blog to be visible under our domain name, so we remove "LoadBalancer" from service definition and add an appropriate Ingress definition. [code] $ cat wp.yml --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi --- apiVersion: apps/v1 kind: Deployment metadata: name: wordpress-mysql spec: selector: matchLabels: app: wordpress-mysql strategy: type: Recreate template: metadata: labels: app: wordpress-mysql spec: containers: - image: mysql:5.6 name: mysql env: - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: mysql-pass key: password ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim --- apiVersion: v1 kind: Service metadata: name: wordpress-mysql spec: ports: - port: 3306 selector: app: wordpress-mysql --- apiVersion: v1 kind: Service metadata: name: wordpress-web labels: app: wordpress-web spec: ports: - port: 80 selector: app: wordpress-web --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: wp-pv-claim labels: app: wordpress spec: accessModes: - ReadWriteOnce resources: requests: storage: 20Gi --- apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: wordpress-web spec: selector: matchLabels: app: wordpress-web strategy: type: Recreate template: metadata: labels: app: wordpress-web spec: containers: - image: wordpress:4.8-apache name: wordpress env: - name: WORDPRESS_DB_HOST value: wordpress-mysql - name: WORDPRESS_DB_PASSWORD valueFrom: secretKeyRef: name: mysql-pass key: password ports: - containerPort: 80 name: wordpress volumeMounts: - name: wordpress-persistent-storage mountPath: /var/www/html volumes: - name: wordpress-persistent-storage persistentVolumeClaim: claimName: wp-pv-claim --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: wordpress-web spec: rules: - host: kubernetes-for-poor.test http: paths: - path: / backend: serviceName: wordpress-web servicePort: 80 $ [/code] The only prerequisite is to create a Secret with the password for MySQL root user. We could easily and safely hardcode 'root/root' in this case (since our database isn't exposed outside the cluster), but we'll do it just to follow the tutorial. [code] $ kubectl create secret generic mysql-pass --from-literal=password=sosecret [/code] And we finally deploy. [code] $ kubectl apply -f wp.yml [/code] After a while, you should see a very familiar Wordpress installation screen at the domain specified in the Ingress definition. Database configuration is handled by dockerized Wordpress startup scripts, so there is no need to do it here like in a traditional install.
Conclusion
And that's it! A fully functional single-node Kubernetes cluster, for all our personal projects. Is this overkill for a few non-critical websites? Probably yes, but it's the learning process that's important. How would you become a 15k programmer otherwise?
13 November 2018
1
Implementing a Continuous Integration System with GitLab and Unity
After countless days and nights spent building applications by hand, we finally decided to streamline part of our workflow through automation. But... why? Our primary motivation for implementing a Continuous Integration system was to reduce the time spent on application builds. In our previous project, a single build sometimes took over half an hour, so we set out to simplify our lives. Here are the key features we aimed to achieve:
- Online access to downloadable builds
- Support for multiple Unity versions
- Building for iOS, Windows, and Android on a single Windows machine
- Storage of builds on our local server

C:\’Program Files’\Unity\Editor\Unity.exe | Launches the Unity executable |
-batchmode | Runs Unity without opening the editor |
-nographics | Ensures no GUI is initiated on Windows |
-executeMethod BuildScript.Build | Invokes the “Build” method in our BuildScript |
-projectPath %CI_PROJECT_DIR% | Points Unity to the project directory |
-quit | Closes Unity after the build completes |
-customProjectName %CI_PROJECT_NAME% | Passes the GitLab project name to our build script |
GitLab provides variables such as project name, pipeline ID, project directory, and build target. We pass these into our Unity script to determine which platform to build.
[code]
.build: &build
stage: build
variables:
tags:
- Unity
script:
- C:\"Program Files"\Unity\Hub\Editor\%UNITY_VERSION%\Editor\Unity.exe -projectPath %CI_PROJECT_DIR% -logfile D:\Logs\%BUILD_TARGET%.log -customBuildPath %BUILD_PATH% -customProjectName %CI_PROJECT_NAME% -customBuildTarget %BUILD_TARGET% -pipelineId %CI_PIPELINE_ID% -batchmode -nographics -executeMethod BuildScript.Build -quit
artifacts:
name: "%CI_PROJECT_NAME% %BUILD_TARGET% %CI_PIPELINE_ID%"
paths:
- ./Builds/%CI_PROJECT_NAME%/%CI_PIPELINE_ID%/%BUILD_TARGET%
[/code]
Builds on Demand
We offer four build options: Windows, Android, iOS, or All. You select the target in the “Run pipeline” dialog. Currently, GitLab CI doesn’t support lists of variables, so we enter them manually. If no target is specified, a build runs for all platforms. That’s why our “build” job is a template—each platform has its own job and only runs when the correct variable is provided.
[code]
Windows:
<<: *build
variables:
BUILD_TARGET: StandaloneWindows64
BUILD_PATH : ./Builds
only:
variables:
- $Target == "Windows"
- $Target == "windows"
- $Target == "All"
- $Target == null
[/code]



21 September 2018
0
I’ve got the power – how to control your PC remotely using Grails
In this article I will show you how to control your Windows computer remotely over the local network, using a Linux machine and a Grails application. First we will turn a device on with the Wake-On-LAN (WOL) method and then turn it off using an RPC shutdown command call. After these few steps, you won't have to push the power button on your computer any more.
1. Turn on a computer - wake up, darling!!!
To begin with, we have to check a few configuration things:
- check if the BIOS allows you to use Wake-On-LAN on the computer. Go to the BIOS power management settings and find the Wake-On-LAN configuration there. If no related option exists, it probably means that the BIOS supports WOL automatically - most new hardware does. When you find the appropriate option, check if it is enabled. Depending on your computer's motherboard, it is either enabled automatically or you will have to set it manually.
- check your system configuration. Open the Windows Device Manager and find your local network device. Right-click the device and select "Properties". Next, select the Advanced tab, find the "Wake on Magic Packet" option and check if it is enabled.

- Firstly, check if your Linux system has the Samba packages installed. We will use the "net rpc" command to communicate with the remote computer. For Ubuntu/Debian distributions it is included in the samba-common-bin package; to install it, run "sudo apt-get install samba-common-bin".
- Configure your Windows machine to disable UAC remote restrictions. Locate the following registry subkey: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System. Check if the LocalAccountTokenFilterPolicy registry entry exists. If not, create a new DWORD entry with the value "1".
- check if the firewall has port 445 open for TCP connections. If it doesn't, you should add a new rule to the firewall (the resulting commands are sketched below).
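With the configuration above in place, the two operations that the Grails application wraps boil down to a couple of shell commands. A sketch (the MAC address, IP address and Windows credentials are placeholders, and the wakeonlan package is assumed to be installed for the first command):
[code]
# wake the machine up by sending a Wake-On-LAN magic packet
wakeonlan AA:BB:CC:DD:EE:FF

# shut it down remotely through Samba's net rpc (force applications to close, no delay)
net rpc shutdown -I 192.168.0.50 -U Administrator%secret -f -t 0
[/code]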
21 August 2018
0
Populate database in Spring
Once upon a time there was a BootStrap class. The class was a very friendly class, therefore it befriended many more classes. With more classes inside, BootStrap grew bigger and bigger, expanding at the rate of the entire universe. This is not a fairy tale.
This is not a fairy tale, because this is exactly what happened. But first things first, you may wonder what the BootStrap
is. It's a mechanism coming from the Grails Framework, which executes code on application startup; I'm using it mostly to populate the database. Just put the BootStrap.groovy
in grails-app/init
folder and add some goodies to the init
closure. Having a background in Grails, this is something I missed in Spring, especially because as I mentioned before, the code grew fairly big. I wanted to rewrite the whole BootStrap logic in Java, because its older Groovy version somehow reminded me of poorly written tests you may see here and there: verbose and ugly. It just wasn't a first-class citizen of the production code.
[java]
@Log4j @AllArgsConstructor
@Component
public class BootStrap {
private final BootStrapService bootStrapService;
@EventListener(ApplicationReadyEvent.class)
private void init() {
try {
log.info("BootStrap start");
bootStrapService.boot();
log.info("BootStrap success");
} catch (Exception e) {
log.error("BootStrap failed," + e);
e.printStackTrace();
throw e;
}
}
}
[/java]
Surprise, surprise! There's an EventListener
annotation that you can put on a method in order to track the ApplicationReadyEvent
and run some code on application startup. Job's done, right? Well, not really, you CAN do that, but do you WANT TO do that? I prefer to keep the business logic in a service, therefore I created and injected BootStrapService
, that just leaves logging and error handling here and Lombok's annotations make the whole thing even neater.
[java]
public abstract class BootStrapService {
@Autowired
protected BootStrapEntryService entryService;
@Autowired
protected MovieService movieService;
@Transactional
public void boot() {
writeDefaults();
}
protected void writeDefaults() {
entryService.createIfNotExists(BootStrapLabel.CREATE_MOVIE, this::createMovie);
}
private void createMovie() {
movieService.create("Movie");
}
}
[/java]
I made the boot
method @Transactional
, because it's our starting point to populate database, later in the project you might want to add here some data migration as well. The REAL rocket science begins in writeDefaults
method! The example createMovie
method reference is passed as a parameter, with double colon as a syntactic sugar, to the createIfNotExists
method of the injected BootStrapEntryService. The other parameter is a simple BootStrapLabel
enum, with a value used as a description for a given operation. I prefer to add a verb as a prefix, just to avoid confusion later, when other kinds of operations come up.
[java]
@Log4j @AllArgsConstructor
@Transactional
@Service
public class BootStrapEntryService {
private final BootStrapEntryRepository bootStrapEntryRepository;
public void createIfNotExists(BootStrapLabel label, Runnable runnable) {
String entryStatus = "already in db";
boolean entryExists = existsByLabel(label);
if(!entryExists) {
runnable.run();
create(label);
entryStatus = "creating";
}
log(label, entryStatus);
}
public boolean existsByLabel(BootStrapLabel label) {
return bootStrapEntryRepository.existsByLabel(label);
}
public BootStrapEntry create(BootStrapLabel label) {
BootStrapEntry bootStrapEntry = new BootStrapEntry();
bootStrapEntry.setLabel(label);
return bootStrapEntryRepository.save(bootStrapEntry);
}
private void log(BootStrapLabel label, String entryStatus) {
String entryMessage = "processing " + label + " -> " + entryStatus;
log.info(entryMessage);
}
}
[/java]
Finally, createIfNotExists
method is the place where the actual database-populating methods are called, in a generic way. The method passed as a reference may be called, but we don't want to write data that has already been written to the database before (at least when it is not an in-memory database), so we check if an entry for the given label already exists. We have to create an entity, a pretty simple one, in this case the BootStrapEntry
, with just a label field to keep the labels in database. existsByLabel
and create
are simple generic methods responsible for basic database operations on the labels.
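For completeness, a minimal sketch of what the BootStrapEntry entity and its repository might look like (assuming JPA and Spring Data, with names matching the code above) is:
[java]
@Entity
public class BootStrapEntry {

    @Id
    @GeneratedValue
    private Long id;

    // store the enum by name so reordering the enum does not corrupt existing rows
    @Enumerated(EnumType.STRING)
    private BootStrapLabel label;

    public BootStrapLabel getLabel() { return label; }
    public void setLabel(BootStrapLabel label) { this.label = label; }
}

public interface BootStrapEntryRepository extends JpaRepository<BootStrapEntry, Long> {
    // derived query backing BootStrapEntryService.existsByLabel
    boolean existsByLabel(BootStrapLabel label);
}
[/java]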
[java]
@Profile("development")
@Service
public class DevelopmentBootStrapService extends BootStrapService {
@Autowired
private BookService bookService;
@Override
protected void writeDefaults() {
super.writeDefaults();
entryService.createIfNotExists(BootStrapLabel.CREATE_BOOK, this::createBook);
}
private void createBook() {
bookService.create("Book");
}
}
[/java]
Now we're getting somewhere! If you wondered why I made the BootStrapService
abstract
, here is the answer. I wanted to make it possible to run some code only in a given environment, like development
. The Profile
annotation with the environment name as a parameter comes in handy. Overriding the writeDefaults
method provides a way to initialize some data only in development
.
[java]
@Profile("production")
@Service
public class ProductionBootStrapService extends BootStrapService {
}
[/java]
If there is the development
, there may be the production
as well. In the given example, I just wanted to run the default data from the parent class, without filling the environment with some random data.
That's all - my little guide to populating the database in Spring. Hopefully it wasn't THAT bad, and even if it was, feel free to leave feedback nonetheless!
14 May 2018
0
How GitLab helps us move fast at itSilesia
Continuous Integration and Continuous Delivery have taken the world by storm. The nature of our business forces development teams to move quickly and be as efficient as possible, not only in regards to standard software development, but also its delivery and quality assurance. In order to achieve these goals, we need tools that will enable us to reach them in a simple way. Thankfully, today we have access to a lot of CI/CD solutions, both free and paid, all very different, with different features and different goals in mind. In essence they all do the same thing - allow us to work smarter, not harder.
Our previous development infrastructure consisted of three parts. For issue tracking and the Scrum process we used Redmine with the Scrum Board plugin. It worked fine and served us well for years, but from today's perspective its UI is seriously outdated and generally hard to use. We could probably try to upgrade it, but sadly it is so old that we really do not want to. For hosting Git repositories we used the Gitolite Redmine integration. It also worked fine, but its functionality is nonexistent compared to modern solutions; it lacks really basic features like Pull Requests or commenting on commits. For continuous integration we were (and somewhat still are) using Jenkins. Now, don't get me wrong - Jenkins is great and very powerful, requires minimal setup, can be scaled, and we achieved some great success with it, especially with the use of Groovy Pipelines. But here is the question: if there is a tool (GitLab, if you didn't read the title) that could integrate all of the above, even at the cost of some customizability and potential vendor lock-in, is it worth it? Well, let's check it out!
1. Setup
How do we start then? The first decision to make is whether we want to use GitLab.com hosted service, or create our private GitLab instance on company infrastructure. This decision was simple to make - we don't want to store the most valuable company asset (the source code) on public infrastructure, and given the recent problems with GitLab accessibility - this is just a better option. There are multiple ways of installing GitLab, but the easiest way for us was using prebuilt "batteries included" Docker image, which contains all services in one big bundle. This way of packaging multiple applications into one monolithic Docker image is a bit controversial in the community (the best practice is to join individual processes running in separate containers using Docker Compose, for example), but in my opinion it works great for such big and complex software packages, because all the internal complexity is completely hidden away. As far as installation is concerned, we already have a few Docker hosts, so running an additional container somewhere isn’t a big deal.
Documentation specifies that the following Docker invocation is a good start:
[code]
sudo docker run --detach \
--hostname gitlab.example.com \
--publish 443:443 --publish 80:80 --publish 22:22 \
--name gitlab \
--restart always \
--volume /srv/gitlab/config:/etc/gitlab \
--volume /srv/gitlab/logs:/var/log/gitlab \
--volume /srv/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest
[/code]
As with everything shipped in the form of a Docker image, we could just blindly copy and paste it into a privileged console... but let's slow down. Do we really understand what these parameters mean? First of all, on most Linux systems port 22 is already used by the SSH server, so another binding on port 22 will surely conflict. The SSH port is used by GitLab to run its internal SSH Git server. Since we are perfectly happy with using only HTTPS as before, we can remove this binding altogether. Also, in our case (and in most cases on typical production systems) the HTTP(S) ports (80 and 443) are already taken by some Apache or Nginx web server running natively. Since we wanted to use our external Apache proxy (which was also doing SSL termination) on the company edge router, we had to change the HTTP port binding to some other random-ish value. Also, we can remove the hostname; it does not seem to affect anything.
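After trimming the parameters as described, the invocation ends up looking roughly like this (8929 is just an example of a "random-ish" host port, to be matched in the external Apache proxy):
[code]
sudo docker run --detach \
  --publish 8929:80 \
  --name gitlab \
  --restart always \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest
[/code]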
After a few minutes of some heavy internal provisioning and initialization, you can visit the main page. It will ask for an administrator password first, then it will allow you to log in. I have to admit that I was very pleasantly surprised by how easy this setup process was.
I will not go deep into typical administrative stuff. As a first step you probably want to create users, assign them to groups, create repositories and hand out permissions. It's not really interesting, to be honest.
2. Projects
The next topic is migrating projects. Since Git is a decentralized system, it doesn't really matter how many repository servers are used for a single project. However, even though we could work this way (and technically we are), we generally don't think in a decentralized way - typically there is just one central repository. This becomes a challenge when you want to migrate existing projects while developers are working on them at the same time. The first step is fairly easy - open a console (you are using Git from the console, right?), create a new Git remote, and then push all branches to the newly created GitLab repository. In the following examples I'll use a very simple Spring Boot application (you can find it here).
[code]
cd ~/luckynumber
git remote add gitlab \
https://gitlab.gliwice.itsilesia.com/agrzybowski/luckynumber.git
git push -u gitlab --all
git push -u gitlab --tags
[/code]
After some uploading, you will end up with two identical repositories.
And...
The issue is that now we have 2 remotes, which means that if you previously had a branch develop (and its remote tracking counterpart origin/develop), now there is a new tracking branch gitlab/develop. And that's only in your local repository - other team members can't know (by definition of the decentralized model) that there is another remote somewhere. There are two ways of dealing with this. The easiest way is to go around the office and yell "Guys, please copy your files, delete the project and reclone from scratch". And this might work, but expect a lot of hasty copy-pasting and useless bulk commits after this procedure.
There is of course a better way. Git has an option to simply change the remote URL. Send your coworkers a Slack message to run this:
[code]
cd ~/luckynumber
git remote set-url origin \
https://gitlab.gliwice.itsilesia.com/agrzybowski/luckynumber.git
[/code]
Be careful, though - all remote branch references (origin/xxx) will immediately get out of sync, so push their local counterparts as soon as possible. It's a good idea to treat remote branches as volatile and not get too emotionally attached to them, because they can be overwritten at any time (remember, git rebase is your best friend, if used correctly). If a commit is important, don't rely on a remote branch to keep it - just create a local branch pointing to it and it will stay around forever.
3. The interesting part
Now let's focus on the CI/CD part. Glancing over the docs, the first thing that pops up is that GitLab CI is very tightly integrated with the source code repository. All configuration (in the form of a YAML file) is stored directly in the repository, jobs are based on branches and triggered on pushes, pull requests have little red/green checkmarks with build statuses and so on. Is this a good idea altogether? In my opinion it is, but only for the CI part, like running tests or collecting code quality metrics. Deployment (especially to production environments) should be decoupled from whatever version control system the application happens to use. But since we can get away with this in our internal projects (we have relatively little risk and hopefully responsible developers), we decided to go 100% GitLab and do our production deployments there as well.
GitLab CI configuration is stored in .gitlab-ci.yml file. It has a specific structure, which we will not cover in full here. The documentation is very comprehensive, and you can also find some examples online. We will be building an almost barebones Spring Boot application with a few unit tests.
[code]
image: openjdk:8-jdk
stages:
- test
- package
- deploy
test:
stage: test
script:
- ./gradlew test
package:
stage: package
only:
- master
- develop
script:
- ./gradlew bootRepackage
artifacts:
paths:
- build/libs/luckynumber-0.0.1-SNAPSHOT.jar
deploy-test:
stage: deploy
only:
- develop
script:
- ./deploy-test.sh
deploy-prod:
stage: deploy
only:
- master
script:
- ./deploy-prod.sh
[/code]
There is a lot of stuff going on here, let's break it down. First of all, the recommended way of building/testing/packaging your application is by using ephemeral Docker containers for creating temporary and 100% reproducible build environments. We went through a lot of pain on traditional Jenkins because of conflicting JVM versions, badly installed Grails distributions or even globally installed NPM packages (which "no one remembers installing"...). Using Docker containers removes this problem completely - every time a build is triggered there is a guaranteed fresh environment available. Here we specify OpenJDK public image, but you can use any, even your own. After setting up the image name, we define build stages. We use very typical steps - build/package/deploy, but they are of course arbitrary. For each stage we can define:
- branch constraints - for example, deploy to production only from master,
- artifacts - they are preserved across stages and stored internally in GitLab for later download,
- a Bash script - defining what to actually do.



5 March 2018
0
Star Wars opening crawl based on CSS animations and transformations
Ten years ago, nobody could have predicted just how far front-end development would come. CSS was used to style elements on a page, but complex tasks such as animations and transformations had to be handled with JavaScript or the popular jQuery framework. Fast forward to the present day and, thanks to advancements in web development, there are now many more options available.
Currently, with CSS 3, HTML 5 and tons of JavaScript frameworks for every possible use, we are in a totally different programming world. CSS has an immense ecosystem of styles which help us create, color, filter, transform or even animate objects on a screen. In this article I want to show a simple way to create the animated Star Wars opening crawl using only HTML and CSS. There are a couple of methods to achieve this goal, but we will try to choose the one with the best performance. You may ask: "Why the Star Wars opening crawl? There are so many interesting topics to choose from." Yeah, it's true, but this task is relatively easy to implement and, more importantly, it exercises exactly the CSS properties for transformations and animations that we need. And I love Star Wars.
1. Setup
At first, we have to prepare HTML and basic CSS for displaying opening crawl. The only thing we need in HTML body is this simple piece of code:
[html]
<p>Here put your opening crawl story</p>
[/html]
Now we need to apply basic styles to the background of our scene and to the crawl's content. The original Star Wars crawl has a blackish background and yellow text sliding across the screen. This can be done simply by adding this CSS code:
[css]
body { background: #111; }
p {
margin: 0 auto;
color: #fcd000;
font-size: 30px;
font-weight: bold;
width: 600px;
}
[/css]
Nothing special - just yellow text on a black background; you probably did this in primary school. Fortunately, now we are going to use all the magic of CSS to make it move.
2. Transformations
As you can see in the featured photo, our text must be leaned. To make it work we need one simple but very powerful style - transform. It provides a lot of transformation functions, but the most important are the ones connected with translation, rotation and scaling. Currently, in most browsers we can use it in 3 dimensions (x, y, z).
In our case try to imagine that you put our <p> text into XYZ plot. We want to lean this text forward, in mathematical words, rotate around X axis.
And this is also pretty easy to do with CSS - just add this chunk of code to <p>:
[css]transform: rotateX(30deg)[/css]
If you are writing the code along with me, you can see that this is certainly not what we expected. It looks more like shrinking than rotating with perspective. Why? The keyword here is perspective. We didn't declare a perspective, so the browser did it for us and set it to none. As a result, there isn't any visible perspective. How can we declare one? There are two slightly different methods. We can attach the perspective to the transformed object or to the parent of this object. In this case there is no difference, but it's extremely important to use the second approach if you want to transform more than one object. If you attach the perspective to the parent, all children will share one perspective. And this is how it works in real life! On the other hand, you may find situations where the "perspective per object" approach is better - mostly when you don't need realistic behaviour but only a nice-looking effect.
In our task we will use parent's perspective. To do this we have to put the code below into our styles (in our case these are <body> styles). For the sake of testing try this value:
[css]perspective: 50px[/css]
I must be honest, for a long time (waaaaay too long) I was writing random pixel values into this property and checking if they looked right. But deep down I've always wanted to understand what these pixels mean. Nowadays, my beard is thicker, I switched from tea to coffee and I finally understand the perspective property, yay! The MDN documentation explains that this value is the distance between the user's eyes and the z = 0 plane. I will try to explain it more clearly. If you make the perspective value very small, it looks like you are standing very close to the object. Vice versa - if you provide a really big value, you see the object from a long distance. With this knowledge you can estimate what value you should type here. In our case the block of text is 600 pixels wide. We want to get the effect of seeing the crawl from up close, so we should consider a value much lower than those 600 pixels. In our test we used 50 pixels, but we can see that we "are standing" almost inside the block of text, so we should increase it to at least 150 pixels.
That should create a pretty nice effect of static Star Wars opening crawl.
[css]
body {
background: #111;
perspective: 200px;
}
p {
font-size: 30px;
color: #fcd000;
font-weight: bold;
width: 600px;
margin: 0 auto;
transform: rotateX(30deg);
}
[/css]
3. Animation
We've got the text on the screen and it's leaned as we wanted, but we still need to make it move. At this moment we have to decide which way to go - and this will determine how efficient our code will be. Practically, we have three significant methods to do this: 1. the margin-top property, 2. position absolute/fixed with the top property, or 3. the transform property using a translation along the Y axis.
Modifying margin is the worst choice. Seriously, never ever do that! The margin property was never meant by browser developers to be used in complex animations. Okay, in this simple example of one animated block of text, it's possible that you won't see any difference. But try to animate hundreds of objects on a screen with margin and you will see why you should forget about this method. And even if you need just a simple animation, it's still not a good choice. Why would you use something worse if you have something better? Speaking of something better..
..the top property. Certainly, it is a better option, but still not the best. In the current frontend world there are many popular applications which use the top/left properties to animate movement in scrollbars, sliders or animated dropdowns. But these examples are relatively simple tasks for our powerful devices (don't tell me your 2014 smartphone is not powerful, it is powerful enough). It has one advantage over the next method - it's more compatible, so if you need an effect that works in older browsers it may be the better choice.
For complex visualizations or simple games using many animations you should always choose transform over margin or positioning. Browsers have different approaches to rendering the DOM, but most of them render elements transformed with the transform property in a special way. For example, in Chrome, such elements are promoted to their own GPU layer (RenderLayer), which allows frames to be drawn quicker. If you are interested in understanding it more precisely, try to make a simple animation with position absolute and check the "Performance" section in Chrome Developer Tools. Then do the same with the transform property. You will see differences in rendering times and in the GPU layers used to render it.
Okay, so we all agree that in this case transform is the best choice. We will stay with the transform property because it will be rendered on a separate GPU layer, and we will create a crazy fast opening crawl animation. But you may ask: "Where is the animation mechanism?" Transform on its own just sets a specific translation, rotation or scale, STATICALLY, nothing more. To make it move we need the animation property - the next fancy CSS feature. And this is quite a tricky and complex feature. With the animation property we can define the duration, delay, name, iteration count, direction, even the timing function of the animation! But to keep things simple, we will leave most of these properties at their defaults. Without further ado, let's make it moooove. Add this code to the <p> styles:
[css]animation: animateCrawl 20s[/css]
We declared that this block will use an animation called "animateCrawl" and that it will take 20 seconds. But we still haven't defined this animation, and to do so we need to fill in its keyframes.
[css]
@keyframes animateCrawl {
0% { transform: rotateX(30deg) translateY(400px); }
100% { transform: rotateX(30deg) translateY(-300px); }
}
[/css]
For those who don't understand what this syntax means - first we define keyframes expressed in percent, and then we define the object's styles at these moments. The animation mechanism will automagically interpolate between them over time, which produces the animation effect. As a result, our block of text will move along the Y axis, from the bottom to the top.
If you are observant, you may have noticed that I also added the rotateX value to the transform property, which was set before. At the beginning of my frontend adventure I kept asking myself: "Why can't I use only the translation here if I want to animate only the translation?" The answer follows from how CSS works: it treats every style as a pair - a property name and a value. Transform is a single property name, so if you want to apply a few transformations, you must put them all into this one transform style. If you type only the translateY transformation, you will simply override the previously added rotateX with its default value. Not very convenient, but we must accept what CSS gives us.
If you launch this animation, you may see a small problem - at the beginning the movement is really fast and it slows down frame after frame. From a physical point of view this is correct, but the original Star Wars opening crawl moved uniformly. To fix it we need to change the animation timing function. It gives us a lot of freedom; we can even define our own cubic bezier function! To keep it simple we will use the built-in timing function called "ease-in". The mechanism of "ease-in" is pretty easy to understand - every frame of the animation is faster than the one before. And this is exactly what we need. The perspective gives an impression of slowing down, while the "ease-in" function accelerates over time, so as a result we get a steady movement. Of course, if you want to be very precise, you should consider using a custom cubic bezier function.
[css]animation: animateCrawl 20s ease-in;[/css]
And it's done. Star Wars opening crawl. Now you can use this knowledge for various CSS animations.
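Putting all the pieces from this article together, the final stylesheet looks roughly like this:
[css]
body {
  background: #111;
  perspective: 200px;
}
p {
  margin: 0 auto;
  color: #fcd000;
  font-size: 30px;
  font-weight: bold;
  width: 600px;
  transform: rotateX(30deg);
  animation: animateCrawl 20s ease-in;
}
@keyframes animateCrawl {
  0% { transform: rotateX(30deg) translateY(400px); }
  100% { transform: rotateX(30deg) translateY(-300px); }
}
[/css]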
If you want to see working solution, check this codepen: https://codepen.io/Mossar/pen/rGpWqX
17 November 2017
0
Refreshed image
A Polish classic says: "sit deep in an armchair, fasten your belts and start". We do not have a ribbon to cut, so another chapter in the company's history will be immortalized by an inaugural entry on the blog. itSilesia has metamorphosed and returns with a new image. We have made fundamental changes - rebranding was accompanied by a building renovation and new blood in our team. We hope that the new website and the design of our company materials will simplify access to any information you need.
31 October 2014
0