Azure Blob Storage
Set Up Endpoint DLP Evidence Collection on your Azure Blob Storage
Endpoint Data Loss Prevention (Endpoint DLP) is part of the Microsoft Purview Data Loss Prevention (DLP) suite of features you can use to discover and protect sensitive items across Microsoft 365 services. Microsoft Endpoint DLP allows you to detect and protect sensitive content across onboarded Windows 10, Windows 11, and macOS devices. Learn more about all of Microsoft's DLP offerings.

Before you start setting up the storage, review Get started with collecting files that match data loss prevention policies from devices | Microsoft Learn to understand the licensing, permissions, device onboarding, and other requirements.

Prerequisites
Before you begin, ensure the following prerequisites are met:
- You have an active Azure subscription.
- You have the necessary permissions to create and configure resources in Azure.
- You have set up an endpoint Data Loss Prevention policy on your devices.

Configure the Azure Blob Storage
You can follow these steps to create an Azure Blob Storage account using the Azure portal. For other methods, refer to Create a storage account - Azure Storage | Microsoft Learn.
1. Sign in to Azure Storage Accounts with your account credentials.
2. Click + Create.
3. On the Basics tab, provide the essential information for your storage account. After you complete the Basics tab, you can choose to further customize your new storage account, or accept the default options and proceed. Learn more about Azure storage account properties.
4. Once you have provided all the information, click the Networking tab. Under network access, select Enable public access from all networks while creating the storage account.
5. Click Review + create to validate the settings. Once validation passes, click Create to create the storage account.
6. Wait for deployment of the resource to complete, then click Go to resource.
7. Once the newly created storage account is open, in the left panel click Data Storage -> Containers.
8. Click + Container. Provide the name and other details, then click Create.
9. Once your container is successfully created, click on it.

Assign relevant permissions to the Azure Blob Storage
Once the container is created, using Microsoft Entra authorization, you must configure two sets of permissions (role groups) on it:
- One for the administrators and investigators, so they can view and manage evidence
- One for users who need to upload items to Azure from their devices

Best practice is to enforce least privilege for all users, regardless of role. By enforcing least privilege, you ensure that user permissions are limited to only those necessary for their role. We will use the portal to create these custom roles. Learn more about custom roles in Azure RBAC.
1. Open the container and in the left panel click Access Control (IAM).
2. Click the Roles tab. It opens a list of all available roles.
3. Open the context menu of the Owner role using the ellipsis button (...) and click Clone. Now you can create a custom role.
4. Click Start from scratch. We have to create two new custom roles. Based on the role you are creating, enter basic details like name and description, then click the JSON tab. The JSON tab shows the details of the custom role, including the permissions added to that role. For the Owner role, the JSON looks similar to the sketch below.
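For reference, this is roughly what the portal's JSON tab shows for a cloned Owner role — the roleName, description, and assignableScopes values here are placeholders, and your portal will show your own subscription scope:

```json
{
  "properties": {
    "roleName": "Owner - Clone",
    "description": "Grants full access to manage all resources.",
    "assignableScopes": [
      "/subscriptions/00000000-0000-0000-0000-000000000000"
    ],
    "permissions": [
      {
        "actions": [ "*" ],
        "notActions": [],
        "dataActions": [],
        "notDataActions": []
      }
    ]
  }
}
```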
Now edit these permissions and replace them with the permissions required for the role you are creating:
- Investigator role: Copy the permissions available at Permissions on Azure blob for administrators and investigators and paste them into the JSON section.
- User role: Copy the permissions available at Permissions on Azure blob for users and paste them into the JSON section.

Once you have created these two new roles, assign them to the relevant users.
1. Click the Role assignments tab, then Add +, and then Add role assignment.
2. Search for the role and click it. Then click the Members tab.
3. Click + Select members. Add the users or user groups you want for that role and click Select.
   - Investigator role: assign this role to administrators and investigators so they can view and manage evidence.
   - User role: assign this role to users who will be under the scope of the DLP policy and from whose devices items will be uploaded to the storage.
4. Once you have added the users, click Review + assign to save the changes.

Now we can add this storage to the DLP policy. For more information on configuring Azure Blob Storage access, refer to these articles: How to authorize access to blob data in the Azure portal; Assign share-level permissions.

Configure storage in your DLP policy
Once you have configured the required permissions on the Azure Blob Storage, add the storage to the DLP endpoint settings. Learn more about configuring DLP policy.
1. Open the storage account you want to use. In the left panel click Data Storage -> Containers, then select the container you want to add to the DLP settings.
2. Click the context menu (... button) and then Container Properties. Copy the URL.
3. Open the Data Loss Prevention settings. Click Endpoint Settings and then Setup evidence collection for file activities on devices.
4. Select the Customer Managed Storage option and then click Add Storage.
5. Give the storage a name and paste the container URL you copied, then click Save. The storage is added to the list for use in policy configuration. You can add up to 10 URLs.
6. Now open the DLP endpoint policy configuration for which you want to collect evidence, and configure your policy using these settings:
   - Make sure that Devices is selected in the location.
   - In Incident reports, toggle Send an alert to admins when a rule match occurs to On.
   - In Incident reports, select Collect original file as evidence for all selected file activities on Endpoint.
   - Select the storage account you want to collect the evidence in for that rule using the dropdown menu. The dropdown shows the list of storage accounts configured in the endpoint DLP settings.
   - Select the activities for which you want to copy matched items to Azure storage.
   - Save the changes.
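Before rolling the policy out broadly, it can be worth confirming that an investigator account can actually reach the evidence container with its Entra ID credentials. Below is a minimal sketch using the Azure.Storage.Blobs and Azure.Identity packages — the account URL and container name are placeholders for your own values, and it assumes the investigator role you created includes blob read/list data actions:

```csharp
using Azure.Identity;
using Azure.Storage.Blobs;

// Sign in as an investigator (DefaultAzureCredential picks up az login,
// Visual Studio, or environment credentials) and list the evidence blobs.
var container = new BlobContainerClient(
    new Uri("https://<yourstorageaccount>.blob.core.windows.net/<evidence-container>"),
    new DefaultAzureCredential());

await foreach (var blob in container.GetBlobsAsync())
{
    Console.WriteLine($"{blob.Name} ({blob.Properties.ContentLength} bytes)");
}
```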
Please reach out to the support team if you face any issues. We hope this guide is helpful, and we look forward to your feedback.

Thank you,
Microsoft Purview Data Loss Prevention Team

Seamless File Management in ASP.NET Core: Azure Blob Storage with Configurable Local Mode

Setting up the Project
Before we dive into the implementation, let's set up the basic structure:
1. Create an ASP.NET Core Web API project.
2. Add the Azure.Storage.Blobs NuGet package for Azure Blob Storage integration.
3. Update appsettings.json to include storage configurations:

```json
"Storage": {
  "Mode": "Local",
  "BlobStorage": {
    "ConnectionString": "YourAzureBlobStorageConnectionString",
    "ContainerName": "YourContainerName"
  }
}
```

- Mode: determines whether to use Local or Azure Blob Storage.
- ConnectionString and ContainerName: Azure Blob Storage details.

Implementing the Storage Service
Here is a custom StorageService that handles file uploads, deletions, and downloads. It dynamically decides the storage location based on the configuration mode.

```csharp
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public interface IStorageService
{
    Task<string> UploadFile(IFormFile file);
    Task<bool> DeleteFile(string fileName);
    byte[] DownloadFile(string fileName);
}

public class StorageService : IStorageService
{
    private readonly IConfiguration _configuration;
    private readonly IHttpContextAccessor _httpContextAccessor;
    private readonly string _storageMode;

    public StorageService(IConfiguration configuration, IHttpContextAccessor httpContextAccessor)
    {
        _configuration = configuration;
        _httpContextAccessor = httpContextAccessor;
        _storageMode = _configuration["Storage:Mode"];
    }

    public async Task<string> UploadFile(IFormFile file)
    {
        if (_storageMode == "Local")
        {
            // Save under wwwroot/uploads with a timestamp prefix to reduce name collisions.
            var uploads = Path.Combine(Directory.GetCurrentDirectory(), "wwwroot", "uploads");
            var fileName = $"{DateTime.Now:MMddss}-{file.FileName}";
            var filePath = Path.Combine(uploads, fileName);

            if (!Directory.Exists(uploads))
                Directory.CreateDirectory(uploads);

            using (var fileStream = new FileStream(filePath, FileMode.Create))
            {
                await file.CopyToAsync(fileStream);
            }

            // Build an absolute URL so callers can fetch the file via the static file middleware.
            var request = _httpContextAccessor.HttpContext!.Request;
            var host = $"{(request.IsHttps ? "https://" : "http://")}{request.Host}";
            return $"{host}/uploads/{fileName}";
        }
        else
        {
            var container = new BlobContainerClient(
                _configuration["Storage:BlobStorage:ConnectionString"],
                _configuration["Storage:BlobStorage:ContainerName"]);
            await container.CreateIfNotExistsAsync(PublicAccessType.Blob);

            var blob = container.GetBlobClient(file.FileName);
            using (var fileStream = file.OpenReadStream())
            {
                await blob.UploadAsync(fileStream, new BlobHttpHeaders { ContentType = file.ContentType });
            }
            return blob.Uri.ToString();
        }
    }

    public async Task<bool> DeleteFile(string fileName)
    {
        try
        {
            if (_storageMode == "Local")
            {
                var uploads = Path.Combine(Directory.GetCurrentDirectory(), "wwwroot", "uploads");
                var filePath = Path.Combine(uploads, fileName);
                if (File.Exists(filePath))
                {
                    File.Delete(filePath);
                    return true;
                }
                return false; // Nothing to delete locally.
            }

            var container = new BlobContainerClient(
                _configuration["Storage:BlobStorage:ConnectionString"],
                _configuration["Storage:BlobStorage:ContainerName"]);
            var blob = container.GetBlobClient(fileName);
            await blob.DeleteIfExistsAsync();
            return true;
        }
        catch (Exception)
        {
            return false;
        }
    }

    // Note: download is implemented for blob storage only; in Local mode files
    // are served directly by the static file middleware.
    public byte[] DownloadFile(string fileName)
    {
        var blobServiceClient = new BlobServiceClient(_configuration["Storage:BlobStorage:ConnectionString"]);
        var container = blobServiceClient.GetBlobContainerClient(_configuration["Storage:BlobStorage:ContainerName"]);
        var blobClient = container.GetBlobClient(fileName);

        using (var memoryStream = new MemoryStream())
        {
            blobClient.DownloadTo(memoryStream);
            return memoryStream.ToArray();
        }
    }
}
```

Static File Middleware
Add app.UseStaticFiles(); in Program.cs to serve local files.
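For completeness, here is a minimal Program.cs sketch showing the wiring the service needs — the IHttpContextAccessor registration, the IStorageService/StorageService pair from above in DI, and the static file middleware. This is one way to set it up, not the only one:

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddHttpContextAccessor();                      // Needed by StorageService to build local URLs.
builder.Services.AddScoped<IStorageService, StorageService>();  // The service implemented above.

var app = builder.Build();

app.UseStaticFiles();   // Serves files saved under wwwroot/uploads in Local mode.
app.MapControllers();
app.Run();
```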
Usage Example
Here's how to use the UploadFile method in a Web API:

```csharp
// ServiceResponse<T> and ProductPicture are the application's own DTO types,
// and _storageService is an injected IStorageService.
public async Task<ServiceResponse<List<ProductPicture>>> UploadImages(List<IFormFile> images)
{
    ServiceResponse<List<ProductPicture>> response = new();
    try
    {
        List<ProductPicture> pictures = new();
        foreach (var file in images)
        {
            var imageUrl = await _storageService.UploadFile(file);
            pictures.Add(new ProductPicture
            {
                Url = imageUrl,
                FileName = Path.GetFileName(imageUrl)
            });
        }
        response.Data = pictures;
    }
    catch (Exception ex)
    {
        response.Success = false;
        response.Message = ex.Message;
    }
    return response;
}
```
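The download path can be exposed the same way. Here is a sketch of a hypothetical controller action returning the stored bytes (the route and the hard-coded content type are illustrative choices, not part of the original article):

```csharp
[HttpGet("download/{fileName}")]
public IActionResult Download(string fileName)
{
    // DownloadFile reads from blob storage; local files are served
    // directly by the static file middleware instead.
    var bytes = _storageService.DownloadFile(fileName);
    return File(bytes, "application/octet-stream", fileName);
}
```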
Conclusion
By integrating Azure Blob Storage and providing a configurable local storage mode, this approach ensures flexibility for both development and production environments. It reduces development friction while leveraging Azure's scalability when deployed. For further details, explore the Azure Blob Storage Documentation and Quickstart: Azure Blob Storage.

Control geo failover for ADLS and SFTP with unplanned failover

We are excited to announce the General Availability of customer managed unplanned failover for Azure Data Lake Storage and storage accounts with SSH File Transfer Protocol (SFTP) enabled.

What is Unplanned Failover?
With customer managed unplanned failover, you are in control of initiating your failover. Unplanned failover allows you to switch your storage endpoints from the primary region to the secondary region. During an unplanned failover, write requests are redirected to the secondary region, which then becomes the new primary region. Because an unplanned failover is designed for scenarios where the primary region is experiencing an availability issue, unplanned failover happens without the primary region fully completing replication to the secondary region. As a result, during an unplanned failover there is a possibility of data loss. This loss depends on the amount of data that has yet to be replicated from the primary region to the secondary region.

Each storage account has a 'last sync time' property, which indicates the last time a full synchronization between the primary and the secondary region was completed. Any data written between the last sync time and the current time may only be partially replicated to the secondary region, which is why unplanned failover may incur data loss.

Unplanned failover is intended to be utilized during a true disaster where the primary region is unavailable. Therefore, once completed, the data in the original primary region is erased, the account is changed to locally redundant storage (LRS), and your applications can resume writing data to the storage account. If the previous primary region becomes available again, you can convert your account back to geo-redundant storage (GRS). Migrating your account from LRS to GRS will initiate a full data replication from the new primary region to the secondary, which incurs geo-bandwidth costs.

If your scenario involves failing over while the primary region is still available, consider planned failover. Planned failover can be utilized in scenarios including planned disaster recovery testing or recovering from non-storage related outages. Unlike unplanned failover, the storage service endpoints must be available in both the primary and secondary regions before a planned failover can be initiated. This is because planned failover is a three-step process: (1) making the current primary read-only, (2) syncing all the data to the secondary (ensuring no data loss), and (3) swapping the primary and secondary regions so that writes land in the new region. In contrast with unplanned failover, planned failover maintains the geo-redundancy of the account, so planned failback does not require a full data copy.

To learn more about planned failover and how it works, view Public Preview: Customer Managed Planned Failover for Azure Storage | Microsoft Community Hub. To learn more about each failover option and the primary use case for each, view Azure storage disaster recovery planning and failover - Azure Storage | Microsoft Learn.

How to get started?
Getting started is simple. To learn more about the step-by-step process to initiate an unplanned failover, review the documentation: Initiate a storage account failover - Azure Storage | Microsoft Learn
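Because the amount of potential data loss depends on the last sync time, it can be worth inspecting it before initiating an unplanned failover. A minimal sketch using the Azure.Storage.Blobs SDK — the account name is a placeholder, and this assumes read access to the secondary endpoint (i.e., an RA-GRS or RA-GZRS account), since the service statistics are only served from the secondary:

```csharp
using Azure.Identity;
using Azure.Storage.Blobs;

// GetStatistics is only available on the secondary endpoint
// ("<account>-secondary") of a read-access geo-redundant account.
var secondary = new BlobServiceClient(
    new Uri("https://<youraccount>-secondary.blob.core.windows.net"),
    new DefaultAzureCredential());

var stats = await secondary.GetStatisticsAsync();
Console.WriteLine($"Geo-replication status: {stats.Value.GeoReplication.Status}");
Console.WriteLine($"Last sync time (UTC):  {stats.Value.GeoReplication.LastSyncedOn}");
```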
Feedback
If you have questions or feedback, reach out at storagefailover@service.microsoft.com

Storage Account, grant SAS tokens but not Entra ID

Hi there, I was playing with Entra and a storage account, and I do have permissions in my subscription to generate SAS tokens for sharing access. But when I try to grant Entra ID access, that seems to be blocked. Just wondering how it is that I can issue SAS tokens but not grant access. What could be the role I'm missing?
Announcing the Next generation Azure Data Box Devices

The Microsoft Azure Data Box offline data transfer solution allows you to send petabytes of data into Azure Storage in a quick, inexpensive, and reliable manner. The secure data transfer is accelerated by hardware transfer devices that enable offline data ingestion to Azure. Our customers use the Data Box family to move petabyte-scale data into Azure for backup, archival, data analytics, media and entertainment, training, and various workload migrations. We continue to get requests about moving truly massive amounts of data in a secure, simple, and quick manner. We've heard you, and to address your needs, we've designed a new, enhanced product to meet your data transfer needs.

About the latest innovation in the Azure Data Box Family
Today, we're excited to announce the preview of Azure Data Box 120 and Azure Data Box 525, our next-generation compact, NVMe-based Data Box devices. The new offerings reflect insights gained from working with our customers over the years and understanding their evolving data transfer needs. These new devices incorporate several improvements to accelerate offline data transfers to Azure, including:
- Fast copy: built with NVMe drives for high-speed transfers and improved reliability, and support for faster network connections
- Easy to use: larger capacity offering (525 TB) in a compact form factor for easy handling
- Resilient: ruggedized devices built to withstand rough conditions during transport
- Secure: enhanced physical, hardware, and software security features
- Broader availability: presence in more Azure regions, meeting local compliance standards and regulations

What's new?

Improved Speed & Efficiency
- NVMe devices offer faster data transfer rates, with copy speeds up to 7 GBps via SMB Direct on RDMA (100-GbE) for medium to large files, a 10x improvement in data transfer speeds compared to previous-generation devices.
- High-speed transfers to Azure, with data upload up to 5x faster for medium to large files, reducing the lead time for your data to become accessible in the Azure cloud.
- Improved networking, with support for up to 100 GbE connections, compared to 10 GbE on the older generation of devices.
- Two options with usable capacities of 120 TB and 525 TB in a compact form factor meeting OSHA requirements. Devices ship next-day air in most regions.

Learn more about the performance improvements on Data Box 120 and Data Box 525.
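As a rough, best-case illustration of what those numbers mean (sustained throughput in practice depends on file sizes, protocol, and verification overhead), here is a quick back-of-the-envelope estimate:

```csharp
// Ideal-case copy time at a sustained 7 GB/s (decimal units), ignoring
// real-world overhead from small files, checksums, and protocol chatter.
double gbPerSecond = 7.0;

foreach (var (name, terabytes) in new[] { ("Data Box 120", 120.0), ("Data Box 525", 525.0) })
{
    double seconds = terabytes * 1000 / gbPerSecond;
    Console.WriteLine($"{name}: {seconds / 3600:F1} hours minimum copy time");
}
// Data Box 120: ~4.8 hours; Data Box 525: ~20.8 hours (best case).
```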
Enhanced Security
The next-generation devices come with several new physical, hardware, and software security enhancements, in addition to the built-in Azure security baseline for Data Box and the Data Box service security measures currently supported by the service:
- Secure boot functionality with hardware root of trust and Trusted Platform Module (TPM) 2.0.
- Custom tamper-proof screws and a built-in intrusion detection system to detect unauthorized device access.
- AES 256-bit BitLocker software encryption for data at rest is currently available. Hardware encryption via the RAID controller, which will be enabled by default on these devices, is coming soon. Furthermore, once available, customers can enable double encryption through both software and hardware encryption to meet their sensitive data transfer requirements.
- These ISTA 6A compliant devices are built to withstand rough conditions during shipment while keeping both the device and your data safe and intact.

Learn more about the enhanced security features on Data Box 120 and Data Box 525.

Broader Azure region coverage
A recurring request from our customers has been wider availability of our higher-capacity device to ease large migrations. We're happy to share that Data Box 525 will be available across most Azure regions where the Data Box service is currently live. This marks a significant improvement in the availability of a large-capacity device compared to the current Data Box Heavy.

What our customers have to say
For the last several months, we've been working directly with our customers of all industries and sizes to leverage the next-generation devices for their data migration needs. Customers love the larger capacity with form-factor familiarity, seamless setup, and faster copy.

"This new offering brings significant advantages, particularly by simplifying our internal processes. With deployments ranging from hundreds of terabytes to even petabytes, we previously relied on multiple regular Data Box devices—or occasionally Data Box Heavy devices—which required extensive operational effort. The new solution offers sizes better aligned with our needs, allowing us to achieve optimal results with fewer logistical steps. Additionally, the latest generation is faster and provides more connectivity options at data centre premises, enhancing both efficiency and flexibility for large-scale data transfers." - Lukasz Konarzewski, Senior Data Architect, Commvault

"We have been using the devices to move 1 PB of archival media data to Azure Blob storage using the Data Box transfer devices. The next generation devices provided a very smooth setup and copy experience, and we were able to transfer data in larger chunks and much faster than before. Overall, this has helped shorten our migration lead times and land the data in the cloud quickly and seamlessly." - Daniel Perry, Kohler

"We have had a positive experience overall with the new Data Box devices to move our data to Azure Blob storage. The devices offer easy plug and play installation, detailed documentation especially for the security features and good data copy performance. We would definitely consider using it again for future large data migration projects." - Bas Boeijink, Cloud Engineer, Eurofiber Cloud Infra

Sign up for the Preview
The Preview is available in the US, Canada, EU, UK, and US Gov Azure regions, and we will continue to expand to more regions in the coming months. If you are interested in the preview, we want to hear from you.
- Customers can sign up here
- ISV partners can sign up here

You can learn more about the all-new Data Box devices here. We are committed to continuing to deliver innovative solutions to lower the barrier for bringing data to Azure. Your feedback is important to us. Tell us what you think about the new Azure Data Box preview by writing to us at DataBoxPM@microsoft.com — we can't wait to hear from you.

Stop by and see us!
Now that you've heard about the latest innovation in the product family, come by and see the new devices at the Ignite session What's new in Azure Storage: Supercharge your data centric workloads, on 21st November starting 11:00 AM CST. You can also drop by the Infra Hub to learn more from our product experts and sign up to try the new devices for your next migration!
Dell APEX File Storage for Microsoft Azure brings a powerful new option to our customers

Dell PowerScale OneFS has been trusted by customers across all industries to provide performant, resilient, and scalable multiprotocol file storage for nearly two decades. At Ignite 2024 we announced a new Dell-managed variant that complements the existing offering to give you powerful choice from a proven industry leader.
On-premises-first hybrid workflows in healthcare. Why start with digital pathology?

Traditionally, digital pathology solutions have been on-premises-only, and they will always require on-premises components. That doesn't mean they can't take advantage of cloud services. Read how one of our ISV solutions, Tiger Technology, solves this challenge. This blog post describes one of the ways digital pathology can be implemented in a hybrid manner.
Update on classic storage account retirement and upcoming changes for classic storage customers

We previously announced that support would end for retired Azure classic storage accounts on 31 August 2024. On or after 1 November 2024, your ability to perform write operations using the classic service model APIs, including PUT and PATCH, will be limited. You will only be able to perform read and list operations using the classic service model APIs.