
New supercomputer opens doors for researchers in Sweden


By Pat Brans, Pat Brans Associates/Grenoble École de Management

Published: 07 Jan 2022

At the time it was installed in the summer of 2018, Tetralith was more than just the fastest of the six traditional supercomputers in the National Supercomputer Centre (NSC) at Linköping University. It was the most powerful supercomputer in the Nordic region.

But just three years later, it was necessary to complement Tetralith with a new system – one that would be specifically designed to meet the requirements of fast-evolving artificial intelligence (AI) and machine learning (ML) algorithms. Tetralith wasn’t designed for machine learning – it didn’t have the parallel processing power that would be needed to handle the increasingly large datasets used to train artificial intelligence algorithms.

To support research programmes that rely on AI in Sweden, the Knut and Alice Wallenberg Foundation donated €29.5m to have the bigger supercomputer built. Berzelius was delivered in 2021 and began operation in the summer. The supercomputer, which has more than twice the computing power of Tetralith, takes its name from the renowned scientist Jacob Berzelius, who came from Östergötland, the region of Sweden where the NSC is located.

Atos delivered and installed Berzelius, which includes 60 of Nvidia’s latest and most powerful servers – the DGX systems, with eight graphics processing units (GPUs) in each. Nvidia networks connect the servers with one another – and with 1.5PB (petabytes) of storage hardware. Atos also delivered its Codex AI Suite, an application toolset to support researchers. The entire system is housed in 17 racks, which when placed side-by-side extend to about 10 metres.
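Taken together, that works out to 480 GPUs across the cluster (60 servers × 8 GPUs each).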

The system will be used for AI research – not only the large programmes funded by the Knut and Alice Wallenberg Foundation, but also other academic users who apply for time on the system. Most of the users will be in Sweden, but some will be researchers in other parts of the world who cooperate with Swedish scientists. The biggest areas of Swedish research that will use the system in the near future are autonomous systems and data-driven life sciences. Both cases involve a lot of machine learning on enormous datasets.

NSC intends to hire staff to help users – not so much core programmers as people who can help users put together parts that already exist. There are a lot of software libraries for AI, and they have to be understood and used correctly. The researchers using the system typically either do their own programming, have it done by assistants, or simply adapt good open source projects to their needs.

“So far, around 50 projects have been granted time on the Berzelius,” says Niclas Andresson, technology manager of NSC. “The system is not yet fully utilised, but utilisation is rising. Some problems use a large part of the system. For instance, we had a hackathon on NLP [natural language processing], and that used the system quite well. Nvidia provided a toolbox for NLP that scales up to the big machine.”

In fact, one of the biggest challenges now is for researchers to scale the software they’ve been using to match the new computing power. Many of them have one or a small number of GPUs that they use on their desktop computers. But scaling their algorithms to a system with hundreds of GPUs is a challenge.
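
As a rough illustration of what that scaling involves, the sketch below shows the general shape of a data-parallel training loop in PyTorch (assumed here as the framework; the model, dataset and hyperparameters are placeholders rather than anything used on Berzelius). Launched with a utility such as torchrun, one copy of the script runs per GPU, each works on its own shard of the data, and gradients are averaged across all of them.

```python
# Minimal data-parallel training sketch (PyTorch assumed; model and data are
# placeholders). Launch with, for example: torchrun --nproc_per_node=8 train.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def main():
    dist.init_process_group(backend="nccl")        # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model and dataset standing in for a real workload.
    model = DDP(torch.nn.Linear(1024, 10).cuda(local_rank), device_ids=[local_rank])
    dataset = TensorDataset(torch.randn(4096, 1024), torch.randint(0, 10, (4096,)))
    sampler = DistributedSampler(dataset)          # each GPU sees its own shard
    loader = DataLoader(dataset, batch_size=64, sampler=sampler, num_workers=4)

    optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)                   # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimiser.zero_grad()
            loss_fn(model(x), y).backward()        # gradients averaged across GPUs
            optimiser.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

The framework is incidental; the point is the pattern of going from one process on one GPU to many identical processes, each handling a slice of the data.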

Now Swedish researchers have the opportunity to think big.

Autonomous systems

AI researchers in Sweden have been using supercomputer resources for several years. In the early days, they used systems based on CPUs. But in more recent years, as GPUs evolved out of the gaming industry and into supercomputing, their massively parallel structures have taken number crunching to a new level. The earlier GPUs were designed for image rendering, but now they are being tailored to other applications, such as machine learning, where they have already become essential tools for researchers.

“Without the availability of supercomputing resources for machine learning we couldn’t be successful in our experiments,” says Michael Felsberg, professor at the Computer Vision Laboratory at Linköping University. “Just having the supercomputer doesn’t solve our problems, but it’s an essential ingredient. Without the supercomputer, we couldn’t get anywhere. It would be like a chemist without a Petri dish, or a physicist without a clock.”


Felsberg was part of the group that helped define the requirements for Berzelius. He is also part of the allocation committee that decides which projects get time on the cluster, how time is allocated, and how usage is counted.

He insists that not only is it necessary to have a big supercomputer, but it must be the right type of supercomputer. “We have enormous amounts of data – terabytes – and we need to process these thousands of times. In all the processing steps, we have a very coherent computational structure, which means we can use a single instruction and can process multiple data, and that is the typical scenario where GPUs are very strong,” says Felsberg.

“More important than the sheer number of calculations, it’s also necessary to look at the way the calculations are structured. Here too, modern GPUs do exactly what’s needed – they easily perform calculations of huge matrix products,” he says. “GPU-based systems were introduced in Sweden a few years ago, but in the beginning, they were relatively small, and it was difficult to gain access to them. Now we have what we need.”
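
The coherent, matrix-heavy workload Felsberg describes can be sketched in a few lines (PyTorch assumed; the tensor sizes are arbitrary). A single batched matrix product applies the same multiply-accumulate operation across millions of values at once, which is the single-instruction, multiple-data pattern GPUs are built for.

```python
# Toy example of the coherent matrix-product workload described above.
# Sizes are arbitrary placeholders; PyTorch is assumed.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

features = torch.randn(128, 1024, 1024, device=device)  # a batch of data matrices
weights = torch.randn(1024, 1024, device=device)        # shared parameters

# One batched call: the same operation applied in parallel across the batch.
products = features @ weights
print(products.shape, products.device)
```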

Massive parallel processing and huge data transfers

“Our research does not require just a single run that lasts over a month. Instead, we might have as many as 100 runs, each lasting two days. During those two days, enormous memory bandwidth is used, and local filesystems are essential,” says Felsberg.

“When machine learning algorithms run on modern supercomputers with GPUs, a very high number of calculations are performed. But an enormous amount of data is also transferred. The bandwidth and throughput from the storage system to the computational node must be very high. Machine learning requires terabyte datasets and a given dataset needs to be read up to 1,000 times during one run, over a period of two days. So all the nodes and the memory have to be on the same bus.
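
Those figures give a sense of the throughput involved: reading a 1TB dataset 1,000 times over a two-day run corresponds to a sustained average of roughly 5.8GB per second flowing from storage to the compute nodes, before allowing for peaks.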

“Modern GPUs have thousands of cores,” adds Felsberg. “They all run in parallel on different data but with the same instruction. So that is the single-instruction, multiple-data concept. That’s what we have on each chip. And then you have sets of chips on the same boards and you have sets of boards in the same machine so that you have enormous resources on the same bus. And that is what we need because we often split our machine learning onto multiple nodes.

“We use a large number of GPUs at the same time, and we share the data and the learning among all of these resources. This gives you a real speed-up. Just imagine if you ran this on a single chip – it would take over a month. But if you split it across a massively parallel architecture – let’s say, 128 chips – you get the result of the machine learning much, much faster, which means you can analyse the result and see the outcome. Based on the outcome, you run the next experiment,” he says.

“One other challenge is that the parameter spaces are so large that we cannot afford to cover the whole thing in our experiments. Instead, we have to do smarter search strategies in the parameter spaces and use heuristics to search what we need. This often requires that you know the outcome of the previous runs, which makes this like a chain of experiments rather than a set of experiments that you can run in parallel. Therefore, it’s very important that each run be as short as possible to squeeze out as many runs as possible, one after the other.”
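
Felsberg’s “chain of experiments” is, in effect, a sequential search over a parameter space in which each configuration is chosen in light of earlier results. The toy sketch below (with a made-up objective standing in for a full training run) shows the pattern and why such runs cannot all simply be launched in parallel.

```python
# Toy sequential parameter search: each trial depends on the previous results,
# so the runs form a chain rather than a parallel batch. train_and_evaluate()
# is a stand-in for a full multi-GPU training run.
import random

def train_and_evaluate(learning_rate: float) -> float:
    """Placeholder for a real experiment; returns a validation score."""
    return -(learning_rate - 3e-4) ** 2   # pretend 3e-4 is the sweet spot

low, high = 1e-5, 1e-1
best_lr, best_score = None, float("-inf")

for trial in range(8):                     # a short chain of experiments
    lr = random.uniform(low, high)
    score = train_and_evaluate(lr)
    if score > best_score:
        best_lr, best_score = lr, score
    # Heuristic: narrow the search window around the best result so far.
    span = (high - low) / 4
    low, high = max(1e-6, best_lr - span), best_lr + span
    print(f"trial {trial}: tried lr={lr:.2e}, best so far lr={best_lr:.2e}")
```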

“Now, with Berzelius in place, this is the first time in the 20 years I’ve been working on machine learning for computer vision that we really have sufficient resources in Sweden to do our experiments,” says Felsberg. “Before, the computer was always a bottleneck. Now, the bottleneck is somewhere else – a bug in the code, a flawed algorithm, or a problem with the dataset.”

The beginning of a new era in life sciences research

“We do research in structural biology,” says Bjorn Wallner, professor at Linköping University and head of the bioinformatics division. “That involves trying to find out how the different elements that make up a molecule are arranged in three-dimensional space. Once you understand that, you can develop drugs to target specific molecules and bind to them.”

Most of the time, research is coupled to a disease, because that’s when you can solve an immediate problem. But sometimes the bioinformatics division at Linköping also conducts pure research to try to get a better understanding of biological structures and their mechanisms.

The group uses AI to help make predictions about specific protein structures. DeepMind, a Google-owned company, has done work that has given rise to a revolution in structural biology – and it relies on supercomputers.

DeepMind developed AlphaFold, an AI algorithm it trained using very large datasets from biological experiments. The supervised training produced “weights” – a trained neural network that can then be used to make predictions. AlphaFold is now open source and available to research organisations such as Bjorn Wallner’s team at Linköping University.
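
In practice, those weights are simply the learned parameters of a trained network, saved once and reloaded whenever a prediction is needed. A generic sketch of that save-and-reuse pattern (PyTorch assumed; the model here is a placeholder, not AlphaFold’s actual interface):

```python
# Generic save-and-reuse of trained weights (placeholder model; not AlphaFold).
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3)
)

# After training (typically on a large GPU system), the learned parameters
# are written to disk...
torch.save(model.state_dict(), "weights.pt")

# ...and another group can later load them and run predictions.
model.load_state_dict(torch.load("weights.pt"))
model.eval()
with torch.no_grad():
    print(model(torch.randn(1, 128)))
```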


There is still a vast amount of uncharted territory in structural biology. While AlphaFold offers a new way of finding the 3D structure of proteins, it’s only the tip of the iceberg – and digging deeper will also require supercomputing power. It’s one thing to understand a protein in isolation, or a protein in a static state. But it’s an entirely different thing to figure out how different proteins interact and what happens when they move.

Any given human cell contains around 20,000 proteins – and they interact. They are also flexible. Shifting one molecule out and another one in, or binding a protein to something else, are all actions that regulate the machinery of the cell. Proteins are also manufactured in cells. Understanding this basic machinery is important and can lead to breakthroughs.

“Now we can use Berzelius to get a lot more throughput and break new ground in our research,” says Wallner. “The new supercomputer even gives us the potential to retrain the AlphaFold algorithm. Google has a lot of resources and can do a lot of big things, but now we can maybe compete a little bit.

“We have just started using the new supercomputer and need to adapt our algorithms to this huge machine to use it optimally. We need to develop new methods, new software, new libraries, new training data, so we can actually use the machine optimally,” he says.

“Researchers will expand on what DeepMind has done and train new models to make predictions. We can move into protein interactions, beyond just single proteins and on to how proteins interact and how they change.”


Check out the shopping experience at Amazon’s new retail clothing store


Amazon does very well with its online clothing sales, but physical clothes stores still sweep up most of the business.

Keen as ever for a piece of the pie, the e-commerce giant has unveiled plans for its first-ever retail clothing store for men and women, selling garments, shoes, and accessories from well-known brands as well as emerging designers.

The 30,000-square-foot brick-and-mortar store will open at The Americana at Brand — an upmarket shopping complex in Glendale, Los Angeles — later this year.

As you can see in the video below, Amazon Style stores will only have one sample of each item on the store floor. If you want to try something on, you simply use the Amazon Shopping app to scan its QR code, select the size and color, and it’ll be sent directly to the fitting room. Inside the fitting room you’ll find a large-screen tablet that lets you call for more colors or sizes.

You can also scan to buy and collect the item almost immediately from the pickup counter. Scanning items will also prompt Amazon’s algorithms to suggest similar items that you might like to try.

“Customers enjoy doing a mix of online and in-store shopping, and that’s no different in fashion,” Simoina Vasen, the managing director of Amazon Style, told CNN. “There’s so many great brands and designers, but discovering them isn’t always easy.”

Vasen also said Amazon Style will sell “everything from the $10 basic to the designer jeans to the $400 timeless piece” in a bid to meet “every budget and every price point.”

While Amazon made its name with online shopping, in recent years the company has explored the world of physical stores with a range of openings.

It started off with bookstores in 2015 before launching the first of many cashier-free Amazon Go stores that use cameras to track your purchases so you can simply grab and go without having to line up. But it didn’t stop there. Amazon Fresh grocery stores have been popping up in states across the country, while it also launched a store called Amazon 4-star selling products that have received high ratings on its online store.

It also acquired Whole Foods in a $13 billion deal in 2017, and last year was reportedly looking into the idea of opening a chain of discount stores, though the pandemic apparently prompted the company to put the idea on hold.


How to enable TPM 2.0 on your PC


One of the more controversial requirements for running Windows 11 is a TPM 2.0 chip. This security chip, usually found on your PC’s motherboard, handles encryption for your fingerprint, other biometric data, and features such as Windows BitLocker. It’s usually turned on by default and is found in most modern systems purchased in the last few years.

Yet if you’re not sure if TPM 2.0 is turned on (usually the Windows 11 updater will check for you), you can check for it manually and then enable it in a few steps. Here’s how.

[Screenshot: Windows 10's Security menu. Arif Bacchus/Digital Trends]

Check for TPM using the Windows Security App

Before diving into our guide, you might want to check for a TPM 2.0 chip on your PC. You can do this manually through the Windows 10 settings. This will let you know if you can continue with the Windows 11 install process.

Step 1: Open Windows 10 settings with Windows Key and I on your keyboard. Then go to Update and Security.


Step 2: From Update and Security click Windows Security followed by Device Security and Security Processor Details. If you don’t see a Security Processor section on this screen, your TPM 2.0 chip might be disabled or unavailable. If you see a spec that’s lower than 2.0, then your device can’t run Windows 11.
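
If you prefer a command-line check, TPM status can also be queried from a script. The sketch below (Python assumed, shelling out to PowerShell’s Get-Tpm cmdlet, which generally needs an elevated prompt) reports whether a TPM is present and ready; the Settings route above remains the simplest way to confirm the 2.0 specification version.

```python
# Optional: query TPM status by calling PowerShell's Get-Tpm cmdlet
# (run from an elevated prompt on most systems).
import subprocess

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", "Get-Tpm"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```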

[Screenshot: Windows 10's security options. Arif Bacchus/Digital Trends]

Get to BIOS to enable TPM

Once you’ve confirmed that you have a TPM 2.0 chip in your system, you’ll need to get into your PC’s BIOS to enable it. You can do this directly through Windows without the need for a keyboard combination on boot. Here’s how.

Step 1: Go into Windows 10 Settings. Head to Update and Security, followed by Recovery and then Restart Now. Your system will restart.

[Screenshot: Windows 10's advanced security options menu. Arif Bacchus/Digital Trends]

Step 2: On the next screen, you’ll want to choose Troubleshoot, followed by Advanced Options and then UEFI Firmware Settings. Click on the Restart button, and this will boot your PC into the system BIOS to check on TPM 2.0.

[Screenshot: Dell's BIOS settings. Arif Bacchus/Digital Trends]

Enable TPM 2.0 in the BIOS

Now that you’re in the System BIOS, you’ll want to look for a specific submenu. On most systems, the TPM settings can be found under settings labeled Advanced Security, Security, or Trusted Computing. Navigate to these menus using either the keyboard combinations listed on the screen or the mouse if your BIOS supports it.

If you’re unsure about which menu to get into, you can visit the links below. Each link will take you to a PC manufacturer’s page with guidance on how to enable TPM 2.0.

Step 1: Once you’re in the respective menu in the BIOS, you can check the box or flip the switch for one of the following options. Sometimes TPM 2.0 can be labeled differently as one of these options: Security Device, Security Device Support, TPM State, AMD fTPM switch, AMD PSP fTPM, Intel PTT, or Intel Platform Trust Technology.


Step 2: If you’re not sure if you’re checking the right box for TPM 2.0 settings, then you might want to check with the support documents for the company that made your PC. We linked to some of those above.


Step 3: Once you enable TPM 2.0, you can exit the BIOS using the commands listed at the bottom of the screen. Usually, the Esc key will do the trick, and you’ll be prompted to Save and Exit. Your system will then restart and boot you back into Windows.


Now that you’ve confirmed your PC has a TPM 2.0 chip and enabled it, you can proceed with the Windows 11 installation process. We have a guide on how to do that, and another piece that explains the differences between Windows 10 and Windows 11.


Samsung Galaxy Z Fold 4: What we want from the new foldable


The Samsung Galaxy Z Fold 3 has been my daily driver for a while now – and I love it. Unfolding it to get a bigger display still feels futuristic every time I do it. The cameras get the work done, and it is an amazing mobile device for productivity. But despite being the best of its kind, all things can use some improvement, and that’s the case for the Galaxy Z Fold 3 as well.

Here’s what I hope Samsung improves on with the Galaxy Z Fold 4.

A wider cover display … with a caveat

[Image: From left, the Galaxy Z Fold 3 and Oppo Find N open from the back. Andy Boxall/Digital Trends]

Front displays on foldables are meant to get things done quickly, without having to go to the next step of unfolding the phone. For instance, replying to that message on WhatsApp, checking the time, swiping through notifications, and anything that requires little effort. The Galaxy Z Fold 3 flies through quick tasks on the slim 6.2-inch display — unless I have to quickly type something on it.

Typing on the cover display of the Galaxy Z Fold 3 is a troublesome task. Due to the slimness of the screen, you don’t get the usable width on the keyboard, which results in a lot of typos that end up frustrating me. Making the cover display wider solves the problem of typos, but also leads to a wider foldable display.

Based on my experience with the Oppo Find N, it might not be a good idea despite the usability improvement. The web is built to operate vertically. You scroll down on stuff, be it your Twitter feed, TikTok, Instagram Reels, reading on a browser, or anything else. Personally, I’ve yet to come across an app or a webpage where I prefer a wider aspect ratio to a taller one. I like the taller aspect ratio of the Fold 3 rather than the wider aspect ratio on the Find N.

If Samsung could shrink the left bezel on the cover display and increase its width, while keeping the overall dimensions the same as the Galaxy Z Fold 3, I’d be glad. If not, I’ll just unfold the display to type quick replies as I have been doing.

Longer-lasting battery and faster charging

[Image: Typing on the closed Galaxy Z Fold 3. Andy Boxall/Digital Trends]

The Fold 3 battery life is above average, but not the best. If you push it to the limits or have a busy day without access to Wi-Fi, it’ll drain the battery before you get to bed. And unfortunately, the fast-charging support is limited to 25 watts.

With Chinese smartphone manufacturers raising the bar on fast charging to a mind-boggling 120W, I hope to see the Galaxy Z Fold 4 offer up to 45W fast charging at least. I’m fine with a 4,400mAh battery if I get support for fast charging that can get my phone from 10% to 60% within 35 minutes or so. Samsung has done it before with the Galaxy S20 Ultra, so there’s no reason it can’t bring 45W fast-charging support to the Galaxy Z Fold 4.

An upgrade to 11W fast wireless charging would also be much appreciated.

Make it lighter

[Image: Galaxy Z Fold 3 outer display showing the ParkyPrakhar Twitter profile.]

The first thing you realize when you start using the Galaxy Z Fold 3 is its weight. Depending on how you hold the phone, your pinkie finger could feel strain when using it folded for longer durations. That shouldn’t be the case with any foldable. It unfolds! Use it that way.

A reduction from the current 271 grams would be a welcome change and provide some relief to people’s pinkie fingers. I have had no major issues with the weight on my current Fold 3, but a lighter model would just feel better in the hands.

Better app optimization

[Image: An open Galaxy Z Fold 3 with apps on the screen. Andy Boxall/Digital Trends]

This one has much more to do with Android app developers than Samsung, but apps could definitely use some optimization. And by app optimization, I don’t mean a full-screen Instagram (although, you can do that in the Samsung Lab in Settings).

Apps like WhatsApp, which is used by billions of people, need to step things up. On the Galaxy Z Fold 3, if I’m taking a photo from the app, it magnifies everything. The viewfinder doesn’t give you an accurate estimate of what your photo is going to look like. Everything is blown up and magnified – even on video calls! If the user at the other end is holding the smartphone at the usual distance, you’ll see a cropped version on the folding display.

I hope WhatsApp can push out an update that fixes things, especially when its sister app — Instagram — has it all figured out in the Stories section. Instagram Stories don’t crop or magnify your image in the viewfinder.

There’s a decent chance the situation will start to improve with Android 12L, but a lot still rides on app developers implementing these changes.

Creaseless folding display

[Image: A Samsung Galaxy Z Fold 3 with the display turned off, lying on some leaves.]

The crease on the Fold 3 folding display is much like the notch on Apple devices – you stop noticing it after a while. However, it is still noticeable when there’s a dark background, especially when reading something on the Kindle app, which is a common use case for me. Despite the crease bothering me sometimes, I love reading on the Fold 3.

On the other hand, the Oppo Find N‘s foldable display doesn’t have a deep crease like the Fold, though that might change after long-term use. But out of the box, the Find N has a much more seamless foldable display that looks and feels more pleasant to use. I just wish Samsung could figure out a way to minimize the crease to make my reading experience more pleasant.

Better UDC selfie shooter on the inside

[Image: Galaxy Z Fold 3 on a pavement.]

When Samsung debuted the 4MP under-display camera (UDC) on the Fold 3, it was making a huge bet by adding an innovative new feature while also sacrificing usability. It’s beautiful to have a 7.6-inch display without any cutout bothering you and makes full-screen content appear more thrilling.

However, the quality of the selfies taken from the UDC isn’t great, as we noted in our review. Fortunately, Samsung is likely already working on a next-gen UDC with better image quality output, and I hope it debuts on the Galaxy Z Fold 4.

Built-in dock for S-Pen

Samsung brought S Pen support from its Note lineup to the Galaxy S21 Ultra and the Galaxy Z Fold 3. However, both of them missed out on a huge functional design feature that the Note had – a place to keep the S Pen. If the Galaxy S22 Ultra renders are anything to go by, Samsung is already working on a place to slot the S Pen without needing to shell out for a special case. That’ll make the S22 Ultra feel much more like the presumably defunct Note series, while the Z Fold 4 could get this slot, too, and serve as a more effective note-taking slate.

When will Galaxy Z Fold 4 launch?

The Galaxy Z Fold line has largely replaced the Galaxy Note lineup, which used to serve as Samsung’s second flagship series. The Galaxy Z Fold 4 is expected to launch alongside the Galaxy Z Flip 4 toward the end of 2022.
