This article describes how to read Parquet data from an Azure Blob container without downloading it locally. It should be a useful reference for anyone facing the same problem; interested readers can follow along below.
Problem Description
I am using the Azure SDK, avro-parquet and Hadoop libraries to read Parquet files from a Blob Container. Currently I download the file to a temporary file and then create a ParquetReader.
try (InputStream input = blob.openInputStream()) {
    Path tmp = Files.createTempFile("tempFile", ".parquet");
    Files.copy(input, tmp, StandardCopyOption.REPLACE_EXISTING);
    IOUtils.closeQuietly(input);
    InputFile file = HadoopInputFile.fromPath(new org.apache.hadoop.fs.Path(tmp.toFile().getPath()),
            new Configuration());
    ParquetReader<GenericRecord> reader = AvroParquetReader.<GenericRecord>builder(file).build();
    GenericRecord record;
    while ((record = reader.read()) != null) {
        recordList.add(record);
    }
} catch (IOException | StorageException e) {
    log.error(e.getMessage(), e);
}
I would like to read this file using the InputStream from the Azure blob item, without downloading it to my machine. There is a way to do this with S3 (Read parquet data from AWS s3 bucket), but does the same possibility exist for Azure?
Recommended Answer
Figured out how to do this.
// Authenticate against the storage account and get a reference to the blob.
StorageCredentials credentials = new StorageCredentialsAccountAndKey(accountName, accountKey);
CloudStorageAccount connection = new CloudStorageAccount(credentials, true);
CloudBlobClient blobClient = connection.createCloudBlobClient();
CloudBlobContainer container = blobClient.getContainerReference(containerName);
CloudBlob blob = container.getBlockBlobReference(fileName);

// Point Hadoop's NativeAzureFileSystem at the container, authenticating with a SAS token.
Configuration config = new Configuration();
config.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem");
config.set("fs.azure.sas.<containerName>.<accountName>.blob.core.windows.net", token);

// Read the Parquet file directly over wasbs:// instead of downloading it first.
URI uri = new URI("wasbs://<containerName>@<accountName>.blob.core.windows.net/" + blob.getName());
InputFile file = HadoopInputFile.fromPath(new org.apache.hadoop.fs.Path(uri), config);
ParquetReader<GenericRecord> reader = AvroParquetReader.<GenericRecord>builder(file).build();
GenericRecord record;
while ((record = reader.read()) != null) {
    System.out.println(record);
}
reader.close();
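The snippet above assumes a SAS token is already available in the token variable. As a minimal sketch that is not part of the original answer, one way to generate a short-lived, read-only SAS token for the same container with the legacy azure-storage SDK might look like this; the permission set and the one-hour expiry are illustrative assumptions:

import com.microsoft.azure.storage.blob.SharedAccessBlobPermissions;
import com.microsoft.azure.storage.blob.SharedAccessBlobPolicy;
import java.util.Calendar;
import java.util.EnumSet;

// Sketch: build an ad-hoc access policy allowing reads (and listing) on the container.
SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy();
policy.setPermissions(EnumSet.of(SharedAccessBlobPermissions.READ, SharedAccessBlobPermissions.LIST));
Calendar expiry = Calendar.getInstance();
expiry.add(Calendar.HOUR, 1); // token valid for one hour (illustrative choice)
policy.setSharedAccessExpiryTime(expiry.getTime());

// Generate the SAS token from the same CloudBlobContainer created above.
// The second argument is an optional stored access policy identifier; null means an ad-hoc SAS.
String token = container.generateSharedAccessSignature(policy, null);

Note that resolving wasbs:// paths also requires the hadoop-azure module (which provides org.apache.hadoop.fs.azure.NativeAzureFileSystem) and its Azure storage dependency on the classpath.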
That concludes this article on reading Parquet data from an Azure Blob container without downloading it locally. We hope the recommended answer is helpful.