Saturday, December 20, 2014

Deep Learning: Speech Technology - Voice2Text

Artificial Intelligence (AI) experts predict that half of web searches will soon be made through speech and images. In the future, we might search by speaking rather than by typing words. Large clusters of computers in the cloud can now understand the words and sentences we dictate to them. By 2020, perhaps 50 percent of queries will come through speech or images, with deep learning technology increasingly involved.

I use OPEN SOURCE speech recognition engines (viz. Julius & HTK) to support this cause, so PING me if you need help adopting this new technology..!!

Saturday, November 1, 2014

BigData: The NEED to have real-time "Customer & Market Analytics".

Language technologies — text, speech, and social analytics, built on natural language processing and semantic analysis — are the key to understanding consumer, market, and public voices. We need to apply them to extract the full measure of business value from social and online media, customer interactions and other enterprise data, scientific and financial information, and a spectrum of other sources.

One of the greatest BigData challenges for any organization is the Voice of the Customer (VoC), which is undoubtedly emerging as a key force realigning marketing, public relations, customer service, and every element of the supply chain for enterprises around the world. Organizations that understand the importance of VoC are prioritizing the need to engage customers more effectively and to monitor what customers are saying for business intelligence, responsiveness, and planning.

While organizations have invested heavily in setting up customer support desks and call centers, very little has been done to process the data captured from these calls. Most organizations rely on the efficiency of the support person to decipher the data and convert it into the needed information. Converting voice data into text that can be stored and analyzed further is a challenge the technology world faces today.

This real-time voice-to-text data contains factual, qualitative, and subjective information. It is difficult to analyze because it is voluminous and unstructured in nature. Having the strongest and most accurate NLP technology to carry out sentiment analysis on this data is therefore essential.

I am a real data dog and look forward to hearing from people interested in voice-to-text analytics projects, seminars, conferences, and talk sessions. I dream of being a leading provider of text analytics software for both customer experience management (CEM) and customer insights ubiquitous (CIU).

Please feel free to PING me through the e-mail below. Thanks ..!!

Tuesday, May 31, 2011

C++ into synthesisable Verilog HDL (a.k.a. NANDGGIR)

I am working to develop 'NANDGGIR', a 'C++ into synthesisable Verilog' software tool. The conversion involves various program transformations built on the LLVM-Clang compiler project. The challenge, as a compiler developer, is to work independently to bring this novel, non-obvious, and useful 'C++ to Verilog' tool to life. This is not trivial work to be completed in a short time; it is expected sometime by mid-2013.

Wednesday, March 10, 2010

Static-Analysis: Parallel Programs

This year (2010) one of my goals is to try implementing "Static-Analysis: Parallel Programs", a program verification tool for Message Passing Interface programs. Its implementation is a deep topic, so the disclosure of the product will be brief rather than a detailed description of the passes developed.

Tuesday, December 22, 2009

Multicore Review: Best Multicore Posts of 2009

MulticoreInfo.com published 1800 posts in 2009 linking to useful resources of multicore-related information. Among those, my article on Auto-Vectorization was declared one of the top 10 posts by MulticoreInfo.com. I appreciate all the readers who helped place my article in the top 10 among the 1800 posts on MulticoreInfo.com in 2009.

Please check this link.

Friday, November 20, 2009

Static-Analysis: Parallel programs

Currently, static analysis for parallel programs does not exist, either as a commercial or as an open-source product. Parallel programs communicate across nodes using Message Passing Interface (MPI) libraries, and all MPI implementations are libraries containing binary objects. As a consequence, when a compiler translates and optimizes a parallel application with calls to MPI, it treats the MPI calls as a black box and avoids optimizing the calls and, in some cases, the code that surrounds them. Since a compiler cannot be constructed to reason about node-to-node communication, we need static-analysis tools that can report both warnings and defects at the parallel-programming level.

There are debuggers like Marmot, ISP, TotalView, DDT, etc., which analyze run-time errors such as deadlocks and race conditions, check correct construction and destruction of resources, issue warnings for non-portable constructs, verify MPI API usage (i.e. arguments such as tags, communicators, ranks, etc.), and much more; but tools that statically analyze the source code of parallel programs are totally absent.

The concepts behind, and the need for, static analysis of parallel programs as a software tool, a product for computer scientists in the HPC scientific-applications community, will be shared soon as part of information sharing.

Wednesday, October 21, 2009

GCC Internals

GCC Internals was written with respect to v4.3 in 2007. The complete scenario of the GCC Front-End, Middle-End, and Back-End is discussed. It serves the purpose of giving an outline of GCC Internals.

Vectorization

This article is posted on the Intel Software Network (ISN) forum. It covers something compiler developers normally try to promote: how C/C++ source code can be vectorized by a compiler. I have emphasized in this paper that, compared to rewriting code in assembly, the approach described here with an example yields code that is (i) easier to read and understand, (ii) easier to maintain, and (iii) still achieves optimal performance.