Modern data-intensive applications, such as analytical database management systems or machine learning pipelines, increasingly run as distributed systems in the public cloud or in enterprise datacenters. Distribution helps scale compute and storage resources but also introduces various data-movement bottlenecks. Placing parts of the computation closer to the network can reduce these bottlenecks and allow data-intensive systems to scale better. SmartNICs, that is, network interface cards with compute capabilities, enable such close-to-network computation and are becoming common in the cloud. This talk is composed of three parts: First, we look at the driving forces behind the distributed architectures that are now standard in the cloud and motivate why computation close to the network is necessary. Second, we cover the design spectrum of SmartNICs, explaining what their internal architecture can look like and which specific processing elements they can incorporate. In the third part of the talk, we sample recent research projects that successfully leverage SmartNICs to make applications more efficient and more scalable.