Related Content
Breaking Down Apache’s Hadoop Distributed File System: Apache Hadoop is a framework for big data. One of its main components is HDFS, the Hadoop Distributed File System, which stores that data. You might expect that a storage framework holding such large quantities of data would require state-of-the-art, failure-proof infrastructure, but quite the contrary is true.
Lessons the Software Community Must Take from the Pandemic: Due to COVID-19, organizations of all types have had to implement continuity plans within an unreasonably short amount of time. These live experiments in agility have shaken up our industry, but they have also taught us invaluable lessons about digital transformation, cybersecurity, performance engineering, and more.
Comparing Apache Hadoop Data Storage Formats: Apache Hadoop can store data in several supported file formats. To decide which one you should use, analyze their properties and the type of data you want to store. Let's look at query time, data serialization, whether the file format is splittable, and whether it supports compression, then review some common use cases.
5 Pitfalls to Avoid When Developing AI Tools: Developing a tool that runs on artificial intelligence is mostly about training a machine with data. But you can’t just feed it information and expect AI to wave a magic wand and produce results. The types of data sets you use, and how you use them to train the tool, matter. Here are five pitfalls to be wary of.
Benefits of Using Columnar Storage in Relational Database Management Systems: Relational database management systems (RDBMS) store data in rows and columns. Most relational databases store data row-wise by default, but a few provide the useful option to store data column-wise. Let’s look at the benefits of columnar storage and when you'd want to use it.
Choosing the Right Threat Modeling Methodology: Threat modeling has transitioned from a theoretical concept into an IT security best practice. Choosing the right methodology means finding what works for your SDLC maturity and ensuring it produces the desired outputs. Let’s look at four different methodologies and assess their strengths and weaknesses.
Comparing Apache Sqoop, Flume, and Kafka: Apache Sqoop, Flume, and Kafka are tools used in data science. All three are open source, distributed platforms designed to move data, and each can operate on unstructured data. Each also supports big data at the scale of petabytes and exabytes, and all are written in Java. But there are some differences between these platforms.
Fearless Refactoring, Not Reckless Refactoring: Fearless refactoring is the agile concept that a developer should be able to incrementally change code without worrying about breaking it. But it doesn't mean you can skip the safety net that detects and corrects defects quickly when changes are made; that's just reckless. Here's how to avoid reckless refactoring.