[Deep.In. Interview] Looking for endless creativity with AI Human.

Let me introduce you to the honest story of Deep.In. at DeepBrain AI.
*DeepBrain AI cherishes the colleagues who work together here. Deep.In. is the nickname for DeepBrain AI employees, chosen to celebrate the strong fellowship among them.

English name : Deep. In.
Korean name : 딥.인.
International name : Deep.人.

 

Q. Please introduce yourself.
Hello, I'm Park Young-hoon from the creative team at DeepBrain AI.

Q. What are you doing in the creative team?
Well, I do a lot of work: everything from video and image content used in marketing to sales content used by each team and images used in internal documents.
It's safe to say that almost all the content DeepBrain AI generates goes through our team.
There's so much that I can't even list it all.

 

Q. Who's in the creative team?
Since we're a company mainly focused on video synthesis, many of our members have backgrounds in video synthesis or video content.
Some are in charge of filming, some specialize in editing videos, and there's an art director who works on images, and so on.
Quite a few of us previously worked mainly at advertising agencies or in broadcasting.

Q. How’s the work culture of the creative team?
Freedom, above all. Just as DeepBrain AI pursues an open culture, our team pursues one too.
I take naps or play games during breaks. As long as you don't stray from your responsibilities and act responsibly, no one bats an eye.
I think being free is our team's culture.

 

Q. Which welfare system are you most satisfied with?
Personally, I think it's food.
The company serves breakfast, lunch, and dinner, with snacks and coffee in between.
The CEO's point is that if you work hard, the company will take care of the meals, and I think I understand what he means.
So among our various welfare programs, I'd pick the ones related to food.

Q. What's the strength of DeepBrain AI?
No matter how well we package the brand and advertise it, in the end I think it is the essence of the brand that appeals to consumers.
I think the essence of DeepBrain AI is technology.
I think this technology is the biggest strength of Deep Brain AI.

 

Q. Why did you choose DeepBrain AI?
I think the answer follows on from the previous question.
I believe DeepBrain AI's technology is its biggest strength.
I wanted to exercise my skills to the fullest at a company with that kind of technology.

Q. What is your goal, P.Kyle?
First of all, I hope DeepBrain AI becomes a company that is well known and well regarded by many people.
I'm going to work hard to make it so, of course together with our company's partners and employees.
My goal is to enjoy sharing these dreams while working hard.
That's why I want to be recognized here.


KoDF: A Large-scale Korean Deepfake Detection Dataset

A variety of effective face-swap and face-reenactment methods have been publicized in recent years, democratizing face synthesis technology to a great extent. Videos generated this way have come to be known collectively as deepfakes, a term with a negative connotation owing to the various social problems they have caused.

Manipulated data is required to develop algorithms that detect such deepfake content. However, the currently available datasets (FaceForensics++, DeepFaceLab, DFDC, etc.) are biased toward Caucasian faces, lack diversity and transparency in their manipulation models, and in some cases do not disclose the synthesis techniques used.

To solve this problem, DeepBrain AI has built the Korean Deepfake Detection Dataset (KoDF).
KoDF is a deepfake detection dataset that includes 175,776 fake clips and 62,166 real clips of 403 subjects.

 

Table 1. Quantitative comparison of KoDF to existing public deepfake detection datasets

The deepfake samples were generated with six different synthesis models. FaceSwap, DeepFaceLab, and FSGAN are face-swap models; First Order Motion Model (FOMM) is a video-driven face-reenactment model; and the remaining two, Audio-driven Talking Face Head Pose (ATFHP) and Wav2Lip, are audio-driven face-reenactment models.

 

Figure-1. Synthetic model ratio of KoDF

Postprocessing
All the methods listed above produce a sequence of image frames matched to the facial region cropped during the preprocessing step. Because most models fail to reconstruct accurate details around the facial boundaries, the synthesized output must be blended back into the original frame.

Using the same facial landmark detection algorithm from the preprocessing stage, we create a facial mask from the synthesized image frame. The border of the mask region undergoes Gaussian blurring to reduce artifacts, and the blurred images are blended into the original video frames at the corresponding temporal positions.
This postprocessing procedure significantly reduces jitters while preserving details around the facial borders.
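As a rough illustration of this blending step, the sketch below feathers the facial mask's border and alpha-blends the synthesized face into the original frame. The `blend_face` helper, the array shapes, and the repeated box blur (standing in for the Gaussian blur) are all assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def soften(mask: np.ndarray, radius: int = 4) -> np.ndarray:
    """Feather a binary mask by repeatedly blurring its border."""
    soft = mask.astype(np.float64)
    for _ in range(radius):
        padded = np.pad(soft, 1, mode="edge")
        # average each pixel with its 4 neighbours (one simple blur pass)
        soft = (padded[1:-1, 1:-1] + padded[:-2, 1:-1] + padded[2:, 1:-1]
                + padded[1:-1, :-2] + padded[1:-1, 2:]) / 5.0
    return soft

def blend_face(original, synthesized, mask, radius=4):
    """Alpha-blend a synthesized face region back into the original frame."""
    alpha = soften(mask, radius)[..., None]     # HxWx1 feathered matte
    return alpha * synthesized + (1.0 - alpha) * original

# Toy example: blend a uniformly gray "face" into a black frame.
frame = np.zeros((64, 64, 3))                   # original frame
fake = np.full((64, 64, 3), 0.5)                # synthesized face region
mask = np.zeros((64, 64))
mask[16:48, 16:48] = 1.0                        # facial region from landmarks
out = blend_face(frame, fake, mask)
```

Deep inside the mask the synthesized pixels are kept as-is; at the border the feathered matte mixes the two sources, which is what suppresses the visible seam and jitter described above.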

 

We evaluate the overall quality of the synthesized output with Structural Similarity Index Measure (SSIM) and Average Keypoint Distance (AKD). The former compares the structural similarity between the target clip and the generated video, and the latter represents the accuracy of the synthesized clips’ facial expressions given the target video as a ground truth.

 

Table 2.  Comparison of FF++ and KoDF by average SSIM and AKD.

 

To evaluate the overall quality of the database, we randomly choose 500 real clips and 500 corresponding synthesized clips. From each fake sample, 100 frames are uniformly extracted, and their real matches are taken from the identical temporal positions. For these 100 pairs, SSIM and AKD are computed and averaged to produce the final values, summarized in Table 2.
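Of the two metrics, AKD is straightforward once facial keypoints have been extracted by a landmark detector (assumed external here). A minimal sketch, with hypothetical array shapes:

```python
import numpy as np

def average_keypoint_distance(kp_real: np.ndarray, kp_fake: np.ndarray) -> float:
    """Average Euclidean distance between matched facial keypoints.

    kp_real, kp_fake: (F, K, 2) arrays -- F frames, K keypoints, (x, y) coords.
    A lower AKD means the synthesized clip tracks the target's facial
    motion more accurately.
    """
    assert kp_real.shape == kp_fake.shape
    per_point = np.linalg.norm(kp_real - kp_fake, axis=-1)  # (F, K) distances
    return float(per_point.mean())

# Hypothetical example: 100 frames, 68 keypoints, fake offset by 3 px in x.
kp_real = np.random.rand(100, 68, 2) * 256
kp_fake = kp_real + np.array([3.0, 0.0])
akd = average_keypoint_distance(kp_real, kp_fake)  # exactly 3.0 for this offset
```

Averaging this value (and SSIM, computed by a standard image-quality library) over the sampled frame pairs yields the per-dataset numbers in Table 2.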

The ultimate goal of a deepfake detection dataset is to help develop a general detection model that performs well against a wide variety of real-world deepfakes. Most studies on deepfake detection are designed to measure how the proposed detection models perform on a particular deepfake detection dataset. The premise is that the target dataset is a good approximation of the distribution of real-world deepfake instances.

In the subsequent experiments, we investigate whether existing deepfake detection datasets meet this assumption, i.e., whether they guarantee a sufficient level of generality.

 

Figure 2. ROC curves of the DFDC winning detection model, trained on FF++, DFDC, KoDF, and their union respectively, and evaluated on each of the three single datasets.

From the experimental results, we can deduce that the deepfake detection task is strongly prone to overfitting, much more so than regular image classification tasks, where models learn diverse signals that recur naturally (i.e., local patterns and global structures). Deepfake detection models, by contrast, focus on artifacts arising during the generation process, which inevitably vary with the synthesis methodology. An ideal deepfake detection dataset should thus incorporate examples of a maximal variety of deepfake methods and a wide range of real videos. No standalone deepfake dataset published so far seems to achieve sufficient generality on its own, and a practical solution is to use multiple datasets joined together.

In conclusion, we have presented a new large-scale dataset to help researchers develop and evaluate deepfake detection methods. KoDF focuses on Korean subjects, a demographic that tends to be underrepresented in other major deepfake detection databases. It expands the range of employed deepfake methods, regulates the quality of the real and synthesized clips, manages the distribution of subjects by age, sex, and speech content, and simulates possible adversarial attacks. While KoDF is an extensive database in itself, we expect it to work even more effectively in mutual complementation with existing and future deepfake detection databases, including the two milestone datasets FF++ and DFDC. We experimentally demonstrate the benefit of compositing datasets for in-the-wild deepfake detection, and we hope KoDF will serve as a stepping stone for future studies in the field.

※For more information, please refer to the paper below and the published KoDF.

※The following content is based on a paper published by DeepBrain AI (KoDF: A Large-scale Korean DeepFake Detection Dataset, https://arxiv.org/abs/2103.10094), accepted at the 2021 International Conference on Computer Vision (ICCV).

※The Korean deepfake manipulated video dataset (KoDF), built by DeepBrain AI, was released on AI HUB, operated by the National Information Society Agency (NIA), for research purposes. (https://aihub.or.kr/aidata/8005)

※This dataset was built with the support of the 2020 AI learning data construction project. (Participating organizations: Seoul National University, Crowdworks)


DeepBrain AI - Named as CES 2022 Innovation Awards Winner

▶ The AI Human video synthesis platform 'AI STUDIOS' by DeepBrain AI won a CES 2022 Innovation Award in the Streaming category.
▶ DeepBrain AI is coming in January to demonstrate its innovative technology in person at CES Las Vegas and NRF New York.

Seoul, South Korea, November 11th, 2021 – DeepBrain AI is proud to announce that it has been named a CES® 2022 Innovation Awards Honoree for its AI Studios script-to-video solution in the 'Streaming' category. This year's CES Innovation Awards program received a record-high number of over 1,800 submissions. The announcement was made ahead of CES 2022, the world's most influential technology event, happening Jan. 5-8 in Las Vegas, NV and digitally.

The CES Innovation Awards program, owned and produced by the Consumer Technology Association (CTA)®, is an annual competition honoring outstanding design and engineering in 27 consumer technology product categories. An elite panel of industry expert judges, including members of the media, designers, engineers and more, reviewed submissions based on innovation, engineering and functionality, aesthetic and design.

 

 

AI Studios is a video production tool that makes it easy to produce videos without the need to film or employ real people. With DeepBrain's AI Studios delivered as SaaS, a computer is all that's needed, eliminating the costly studio, lighting, cameras, set staff, and even the video host. Just by typing the script, the AI anchor speaks naturally and uses body language and gestures like a real presenter. Individuals and entities small and large can produce various video content, such as YouTube videos, corporate training, and news, all without equipment or expertise.

DeepBrain AI will showcase its AI Human technology, including the winning product, in person at the upcoming CES 2022 conference in Las Vegas. The company is also sponsoring the following NRF 2022 conference in New York, where it will showcase its technology's applications in the retail industry.

The detailed AI Studios product description and photos can be found at CES.tech/innovation. To schedule a meeting with DeepBrain AI at the upcoming conferences, please contact the Global Team at global@deepbrainai.io.

 

Read More >>


30% discount on Black Friday to commemorate the global launch of an AI human video synthesis platform, AI STUDIOS.

- User-friendly AI human video production tool, officially launched after a June beta that secured 5,000 subscribers.
- Affordable monthly subscription for small and medium-sized enterprises, well suited to news and lecture material.
- 28 models, 4 languages, and customizable costumes and background templates.
- Monthly fees from $29 up to $1,999, producing up to 45 hours of video.

 

DeepBrain AI has officially launched 'AI STUDIOS', a software-as-a-service (SaaS) solution based on AI human technology.

AI STUDIOS is a video synthesis and editing platform on which artificial humans present any script that users enter for video production.
Since its beta service opened in June, a total of 370,000 users have visited, and it has secured around 5,000 subscribers from Korea, the United States, China, Southeast Asia, and elsewhere.

While the existing real-time, two-way communication AI human solution targets large enterprises, AI STUDIOS is a monthly-subscription SaaS product that lets small and mid-sized companies use AI human technology without a heavy financial burden.

Typical video content production requires a lot of time, resources, and manpower, from presenters and film crews to studios and equipment. AI STUDIOS, by contrast, lets you create high-quality presenter videos without additional filming, at a fraction of the cost.

For example, local broadcasting stations can produce news hosted by AI humans on behalf of announcers, and educational institutions such as schools and academies can produce lecture materials instead of teachers.
Currently, AI STUDIOS is being used by LG Innotek, Gwangju National University of Education, Gyeonggi Provincial Office of Education, and the Korea Crowd Funding Association in Korea.

Internationally it is used by companies and institutions such as AssetWorks, a U.S. asset management company, China Broadcasting Station, Chinese video platform Yunmeishe, Chinese real estate operation group YinTech, and Myanmar-Korean Society News ADSHOFAR.

AI STUDIOS provides a total of 28 AI models (11 in Korean, 12 in English, 4 in Chinese, and 1 in Japanese) as well as various costumes and background templates.

For those who want to use a more popular model, we have expanded the choices with premium options available at an affordable rate. AI models are expected to be added continuously, and fully virtual AI humans whose faces are not derived from any actual person are also in the making.

 

AI STUDIOS offers three plans: Starter, Standard, and Pro. The Starter Plan, designed for beginners, is offered at a monthly rate of $29 and provides a total of 20 minutes of video production.

The Standard Plan offers a total of 20 hours of video production for $999 per month, and the Pro Plan offers a total of 45 hours for $1,999 per month. Regardless of the plan, all copyrights to the video content created belong to the users.

DeepBrain AI will hold a Black Friday discount promotion to commemorate the launch of AI STUDIOS. The promotion runs until the 30th and offers discounts of up to 30% on everything from service fees to AI model fees.

 

Read More >>


DeepBrain AI brings football star Ronaldo to the screen with video synthesis technology, in collaboration with a global advertising agency.

https://youtu.be/sruZBzw0o4E

"Market Expansion into the Advertising Market with AI Video Synthesis Technology"

DeepBrain AI participated in the production of advertisements that implemented football player Cristiano Ronaldo with AI video synthesis technology.

The advertisement, produced by DeepBrain AI and global advertising agency IDCREATION, is a corporate promotional content of football-based online community platform company "ZujuGP" and was introduced through YouTube and Instagram.

The cinematic depiction of Ronaldo as a ninja confronting a group of enemies, rather than as a footballer, drew attention once it became known that AI technology was used in the production: the Ronaldo who appears in the video was created through AI video synthesis technology.

 

DeepBrain AI applied "face swap" technology, a kind of deepfake technology, to realize Ronaldo's natural facial expressions and movements. In the advertisement, Ronaldo performs martial arts against ninjas, somersaulting and leaping between buildings, and the technology reveals Ronaldo's face the moment the mask comes off.

 

Face swap is a technology that replaces the face of a person in an original video or image with another face: only the person's identity changes, while the facial expression and mouth shape remain the same. In this advertisement, Ronaldo did not film anything; his face was synthesized onto a stand-in actor's footage with face swap technology in postproduction.

 

Recently, deepfakes have been used in various crimes and negative perceptions of them run throughout society, but the potential of AI video synthesis technology in entertainment areas such as advertising, broadcasting, and film is endless.

DeepBrain AI realized the potential of AI video synthesis technology expansion to media content production through this project, and plans to diversify its business area by utilizing deep learning artificial intelligence technology in various fields and forms.

Read More >>


ICT Digital New Deal Best-Case Award goes to DeepBrain AI.

The Ministry of Science and ICT, together with the Korea Communications Agency, promoted the ICT fund project this year.
Fifty companies were selected as best cases of the ICT fund project, and DeepBrain AI was included on the list.
DeepBrain AI was awarded the Korea Communications Agency Director Award for its achievements in the ICT Digital New Deal field.

 

DeepBrain AI participated as a supplier in the AI voucher support project organized by the Information and Communication Industry Promotion Agency.
The project developed an AI Human advertising system for retail stores, laying the groundwork for client companies to deliver electronic price displays with AI Human solutions to large overseas retailers.
DeepBrain AI sees opportunities in the retail and e-commerce markets.

DeepBrain AI will continue to grow and develop for better applications in retail and e-commerce market.

 

Read More >>


DeepBrain AI wins GITEX 2021 after competing with 700 global start-ups from 36 countries!

DeepBrain AI participated in GITEX 2021, the largest IT fair in the Middle East, held in Dubai, and won the Future Stars Super Nova Challenge during the event.

GITEX is a historic event marking its 41st anniversary this year. Every year, IT companies, investors, and government agencies in fields such as AI, FinTech, Blockchain, Healthcare, Smart City, and IoT participate to expand their global businesses, showcasing technology and business models and finding partners. This year, about 140,000 people from 144 countries took part at the Dubai World Trade Center over five days from the 17th.

 

DeepBrain AI won the final title at the "Future Stars Super Nova Challenge," the main program of the event. The program is a start-up pitching competition in which companies give competitive presentations introducing their technology and business items to VCs, investment firms, and partners. A total of 700 companies from 36 countries took part, and 120 advanced to the semi-finals after the first qualifying round. Twenty-two companies then moved on to the final stage, where DeepBrain AI won the final competition.

DeepBrain AI received $55,000 as the final prize for the challenge and also received a pitching opportunity for $500,000 worth of investment from Tim Draper, a legendary Silicon Valley investor.

 

DeepBrain AI highlighted its marketability and technological innovation by presenting real application cases in fields such as finance, education, broadcasting, and retail, centered on its AI human solution, which creates virtual humans through deep-learning artificial intelligence technology. As a result, it received high scores for the technology's originality, potential, and scalability.

 

In addition, DeepBrain AI operated a corporate booth during the event and used it as an opportunity for global promotion. Staff actively fielded inquiries about the company, for instance by explaining the various service uses of AI human technology. Through this, DeepBrain AI held positive discussions on technology alliances and investment with various global companies and institutions, and now expects positive results going forward.

 

Read More >>


What is DeepFake Technology?

 

Recently, when it was revealed that actor Tom Cruise had once been a candidate to play Iron Man, scenes from the movie with Tom Cruise's face in place of Iron Man's became a hot topic. They were made by transforming the face in the video with deepfake technology.

Deepfake is a compound of "deep learning" and "fake," and refers to a fake image or video, created with deep learning technology, whose authenticity is difficult to determine. The word "fake" gives deepfakes a generally negative perception, but it is nonetheless one of the deep learning technologies.

 

Today, we'll look at some cases of deepfake abuse and some positive cases.

 

A case where deepfake was abused.

There are more and more cases of deepfake technology being abused on social media, on video platforms, and in fake news. In particular, it has been widely used to distribute pornography, making it a social problem. Last year there was also deepfake damage in the Nth Room incident, which caused a major controversy in Korean society: a number of illegal pornographic videos bearing the faces of K-pop singers and other female celebrities were distributed.

 

DeepBrain AI's social responsibility in response to these negative deepfake cases:
a free deepfake detection service is open (www.detectdeepfake.ai)

 

With the development of deepfake technology, forged and altered videos of various kinds are becoming a social problem, but there has been no objective analysis tool to check for forgery or alteration.
DeepBrain AI has launched Detect Deepfake AI (Detectdeepfake.ai) in the hope that it will minimize the damage caused by deepfake videos and serve as an opportunity for deepfake technology to develop in a positive direction.

 

 

Detect Deepfake AI is a service that verifies the authenticity of a video through deep learning AI analysis of videos suspected of forgery or alteration. To develop AI technology that detects manipulated videos (deepfakes) generated by neural-network-based manipulation algorithms, we also built and released manipulated video data for training, taking into account various forms of detection interference.

This dataset was built with the support of the 2020 AI learning data construction project. (Participating organizations: Seoul National University, Crowdworks)

 

 

There are also positive cases regarding deepfake.

For cases of DeepBrain AI,

In 2021, to commemorate the August 15 Korean Independence Day, DeepBrain AI supported the specially produced KBS 1TV documentary "Okbaraji, Their Independence Movement" with its "Puppet Master" technology, giving natural facial expressions and movements to independence activists of whom only photographs remain.
By letting these historical figures breathe again, we hoped to give later generations an opportunity to reflect on the meaning of history and to convey a positive impact.

Kim Tae-gyu, an independence activist who was reconstructed with DeepBrain AI's AI image synthesis technology, before and after synthesis.

 

DeepBrain AI recently participated in the production of an advertisement that embodies football player Cristiano Ronaldo with deep fake technology. Deepfake technology is becoming more and more utilized in entertainment fields such as advertisements and movies as it is hard to be distinguished from real people.

Cristiano Ronaldo in commercial with DeepBrain AI technology.(before & After)

 

Another example is the launch of a mobile application (app) that animates a single photo with a variety of facial expressions and movements, delighting and attracting attention among overseas users.

"Newface" APP with various facial expressions and movements through photos by DeepBrain AI.

 

The technology can also be combined with identification systems, for example to find criminals or block terrorists, to detect early signs of cancer through analysis of CT and MRI data, and to recreate people who are no longer with us.

As we have seen from the examples above, deepfake technology is not only a negative technology; it has positive aspects as well, depending on how it is used.

P.S. Under current law on deepfake crimes, editing, synthesizing, filming, or distributing a person's face, body, or voice against their will is punishable by up to five years in prison or a fine of up to 50 million won.

 

 

 


DeepBrain AI’s Deep-Learning-based Video and Voice Synthesis Technology

AI Human is a technology that, from a simple text input, naturally expresses not only voice elements such as speech and intonation but also faces, facial expressions, and movements in video, by learning human faces with deep-learning-based AI technology.

Today, we will explain the learning model related to deep learning-based image synthesis and introduce you to DeepBrain AI's AI Human implementation technology.

 

 

1) Main learning technology model

[CNN-Image Classification Algorithm]
It's a technology that analyzes images by applying shared weights (filters) with convolutional neural networks. A feature refers to data extracted from the various characteristics of the input.

 

<CNN Architecture>

 

The function of a CNN is to classify and recognize images.
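As a toy illustration of the shared-weight (filter) idea, the sketch below slides a single hand-crafted edge filter over an image. This sliding dot product is the core operation of a CNN's convolutional layers; real CNNs learn many such filters from data rather than hand-crafting them, and the `conv2d` helper here is illustrative only.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2-D convolution (deep-learning style, i.e. cross-correlation)
    with a single shared-weight filter.

    The same kernel is applied at every position, which is exactly the
    'shared weights' idea behind CNN feature extraction.
    """
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge filter responds only where intensity changes left-to-right.
image = np.zeros((6, 6))
image[:, 3:] = 1.0                      # left half dark, right half bright
edge_filter = np.array([[-1.0, 1.0]])   # 1x2 vertical-edge detector
fmap = conv2d(image, edge_filter)       # feature map: peaks at the edge
```

Stacking many learned filters, nonlinearities, and pooling layers on top of feature maps like this one is what lets a CNN classify and recognize images.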

[GAN]

Generative Adversarial Networks (GAN) is a deep learning model built from two adversarial neural networks that repeat training until the generated "fakes that look real at first glance" become impossible to distinguish from the real thing.
After the generator produces an image from random noise, the discriminator examines real and fake images and judges true/false, and this feedback is used to train the generator.
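This adversarial game is commonly formalized (following Goodfellow et al.'s original GAN formulation) as a minimax objective: the discriminator D maximizes its ability to tell real data x from generated samples G(z), while the generator G minimizes it:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Training alternates between a discriminator update (increasing V) and a generator update (decreasing it) until the discriminator can no longer distinguish real from fake.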

 

 

 

2) DeepBrain AI’s Original Technology

 

 

<Lip Sync, Face Synthesis Technology>

The lip-sync method is a technology that controls the speech behavior of a video (mouth shape, jaw movement, neck movement) from a voice: an arbitrary voice is input into a video of a particular person speaking, and the original video is synthesized so that the mouth shape matches the given voice. In other words, given an arbitrary voice and a background video as input, a talking-person video can be synthesized.
To develop varied behavioral patterns matched to speech, feature vectors are extracted from the subject's speech videos to model the distribution of behavior patterns, and behavior patterns matched to the speech are then generated by learning feature vectors from the voice.

 

<Real-time Video Synthesis Technology>

DeepBrain AI was the first company in the world to succeed in real-time video synthesis, through the development of process-optimization technology. Three major technologies are needed to implement video synthesis that can communicate with customers in real time. The first is batch processing: to optimize synthesis speed, we developed and applied our own batch processing technology, which handles multiple synthesis requests simultaneously and reduces the latency of video synthesis. The second is cache-server optimization: since most conversations can be captured as data and retained, questions and answers that are expected to recur are stored on a cache server so that video can be served quickly in real time. The last is idle framing: the AI model's expression is natural while it is speaking, but if it stands motionless while the user is speaking, it feels very unnatural. Idle framing minimizes this gap by giving the user the sense that the AI is listening with natural movements.
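To give a feel for the cache-server idea, here is a minimal, hypothetical sketch of an LRU cache that maps expected questions to prerendered answer clips; a cache hit returns the stored clip immediately, and only a miss falls through to the slow synthesis pipeline. The `RenderedClipCache` name and the clip filenames are illustrative, not DeepBrain AI's actual system.

```python
from collections import OrderedDict

class RenderedClipCache:
    """Minimal LRU cache mapping expected questions to prerendered clips."""

    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, question: str):
        """Return the cached clip reference, or None on a miss."""
        clip = self._store.get(question)
        if clip is not None:
            self._store.move_to_end(question)   # mark as recently used
        return clip

    def put(self, question: str, clip_ref: str) -> None:
        """Store a clip, evicting the least recently used entry if full."""
        self._store[question] = clip_ref
        self._store.move_to_end(question)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)     # evict least recently used

cache = RenderedClipCache(capacity=2)
cache.put("What are your opening hours?", "clip_001.mp4")
cache.put("Where is the store?", "clip_002.mp4")
hit = cache.get("Where is the store?")              # served from cache
cache.put("Do you ship overseas?", "clip_003.mp4")  # evicts the oldest entry
miss = cache.get("What are your opening hours?")    # evicted, so a miss
```

In a real deployment the miss path would trigger the batched synthesis pipeline, and the rendered result could then be inserted back into the cache.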


[DeepBrain AI Culture] Introducing the corporate culture of DeepBrain AI, an innovative company that is creating artificial humans.

The 4th Industrial Revolution is changing the way we live and work, and the contact-free (untact) era has accelerated further under the influence of COVID-19. Even after COVID-19 ends, we can expect our daily paradigm to face many changes.
Let me introduce the corporate culture of DeepBrain AI, an innovative company creating artificial humans at the center of the 4th Industrial Revolution.

 

 

DeepBrain AI History in Numbers

DeepBrain AI is an artificial-intelligence-based video synthesis company and one of the top three global companies that hold original technologies for both video synthesis and voice synthesis.
DeepBrain AI has grown rapidly in the five years since its foundation. It has developed and commercialized artificial human technology and is growing at a fast pace of 500% every year. This year, it was recognized as having a corporate value of 180 million dollars after attracting 44 million dollars in Series B investment.

 

DeepBrain AI has been able to grow rapidly due to its unique corporate culture.

 

First, the vision and clear goals for the interactive AI market.

DeepBrain AI's vision is to become a company that provides the benefits of life through artificial intelligence technology.
Our quantitative goal is to reach the highest level of technology in the global market and rank first in market share.
To bring artificial human technology to the public and capture the market early, DeepBrain AI's executives and employees pursue detailed quantitative goals set for each project team and individual.

 

 

Second, The Agile culture.

DeepBrain AI organizes teams by project (product) and establishes and executes work plans in short cycles to quickly plan and launch new services.
Each project team consists of the planners, developers, marketers, and designers, including a leader, needed to carry out its tasks.
The cycle is repeated based on results to improve the project and produce the best outcome.
Teams and individuals are rewarded according to performance, which provides continuous motivation.
We also pursue a horizontal organizational culture, using English names without positions or titles. Through this, we create conditions that respect autonomy and maximize individual abilities.

 

 

Third, Culture that values employees and their families and benefits

DeepBrain AI uses English names without positions or titles to create a horizontal organizational culture that respects individual autonomy.
We try to provide a pleasant environment not only for our employees but also for their families and loved ones through various programs:
self-development expenses to support individual hobbies and growth, medical expenses for individual treatment, Family Day to provide more time with family, thank-you gifts for parents on special occasions, and more.
More detailed benefits will be introduced in following posts.

 

Fourth, Outstanding Results with Outstanding compensation and treatment.

With the recent Series B investment, all employees received an annual salary raise of up to 17 thousand dollars.
DeepBrain AI moves as quickly on fair compensation as it does, agilely, on projects.
Performance-based annual salary negotiations are conducted through quarterly evaluations,
with clearly defined compensation for each project team's and individual's performance.
Depending on quarterly team and individual performance, incentives such as 85 thousand dollars in cash and 85 thousand dollars in stock options provide strong motivation.

 

*Company and employee introductions will continue.