Media Foundation Development for Windows Desktop forum

Asking the sound stream renderer not to render the audio data... how can we do that?


Hi ,

My requirement is to not play the audio data; only the video should play.

If I don't pass the audio data from my custom audio MFT and pass only the video data from my custom video MFT, the calls to the ProcessOutput method of the video MFT stop.

How can I achieve this scenario?
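One approach that avoids touching the MFTs at all (a minimal editorial sketch, not from the original post): deselect the audio stream on the source's presentation descriptor before building the topology, so the session never renders audio. pSource (an IMFMediaSource) and the SafeRelease helper are assumed names.

    IMFPresentationDescriptor *pPD = NULL;
    HRESULT hr = pSource->CreatePresentationDescriptor(&pPD);
    DWORD cStreams = 0;
    if (SUCCEEDED(hr))
        hr = pPD->GetStreamDescriptorCount(&cStreams);

    for (DWORD i = 0; SUCCEEDED(hr) && i < cStreams; i++)
    {
        BOOL fSelected = FALSE;
        IMFStreamDescriptor *pSD = NULL;
        hr = pPD->GetStreamDescriptorByIndex(i, &fSelected, &pSD);
        if (SUCCEEDED(hr))
        {
            IMFMediaTypeHandler *pHandler = NULL;
            GUID majorType = GUID_NULL;
            if (SUCCEEDED(pSD->GetMediaTypeHandler(&pHandler)) &&
                SUCCEEDED(pHandler->GetMajorType(&majorType)) &&
                majorType == MFMediaType_Audio)
            {
                pPD->DeselectStream(i); // the audio stream is never rendered
            }
            SafeRelease(&pHandler);
            SafeRelease(&pSD);
        }
    }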

Best Regards,

Sharad


Delay playing back a wav file using the ActiveX control in IE (Windows Media Player)


We are experiencing a problem where the ActiveX control will randomly wait 30 seconds before actually playing a wav file from an HTTP resource. In the web page we are able to talk to the ActiveX control and get information from it, but it does not start playing until about 30 seconds after we told it to play.

In order to get more information about what might be happening, we would like to see some logging. Is there any way to enable logging and see what is going on in the Windows Media Player ActiveX control?

My apologies if this is not the correct place to post this question.

H.264 SetOutputType returns E_FAIL error


Hi guys,

I am writing a video processing application. I am utilizing a media session. My pipeline consists of a source reader, a decoder MFT (found via MFTEnum; I query for MFTs capable of outputting one of my four desired video formats), the Color Converter DSP (to convert the YUY2, NV12, and YV12 outputs of the decoder into RGB32), and my custom sink. My custom sink writes the samples with no issues. I am planning on adding a variation to my pipeline: I want to inject an H.264 encoder between the color converter and the sink. In this scenario my custom sink will be replaced by the built-in MPEG-4 sink.

Having said the above, my original pipeline without the encoder and MPEG-4 sink works fine.

I query the system for an H.264 encoder with YUY2 input and H264 output and it returns one match. I create an instance and try to set up the input and output types. SetOutputType fails. MFTrace does not give me much beyond the failure.

I tried both using an already set-up media type and creating the media type from scratch. Per the documentation I have filled in all of the pieces. I am setting the output type first and then the input type. The only thing is that I am doing this on Windows 8; I need to set up my VM to check on Windows 7.


Here is the sample code.

HRESULT MediaFoundationManager::FindEncoder(IMFTransform **decoder, IMFMediaType *type)
{
    HRESULT hr = S_OK;
    UINT32 count = 0;

    CLSID *ppCLSIDs = NULL;

    MFT_REGISTER_TYPE_INFO info = { 0 };
    info.guidMajorType = MFMediaType_Video;
    info.guidSubtype = MFVideoFormat_YUY2;

    MFT_REGISTER_TYPE_INFO outInfo = { 0 };
    outInfo.guidMajorType = MFMediaType_Video;
    outInfo.guidSubtype = MFVideoFormat_H264;

    hr = MFTEnum(MFT_CATEGORY_VIDEO_ENCODER,
            0,        // Reserved
            &info,    // Input type
            &outInfo, // Output type
            NULL,     // Reserved
            &ppCLSIDs,
            &count);

    if (SUCCEEDED(hr) && count == 0)
        hr = MF_E_TOPO_CODEC_NOT_FOUND;

    if (SUCCEEDED(hr))
        hr = CoCreateInstance(ppCLSIDs[0], NULL, CLSCTX_ALL, IID_PPV_ARGS(decoder));

    if (SUCCEEDED(hr))
    {
        ConfigureMFTFromScratch(*decoder, info.guidSubtype, outInfo.guidSubtype, true);
    }

    CoTaskMemFree(ppCLSIDs);
    return hr;
}

void MediaFoundationManager::ConfigureMFTFromScratch(IMFTransform *transform, GUID inputFormat, GUID outputFormat, bool outputFirst)
{
    CComPtr<IMFMediaType> inputMediaType = NULL;
    CComPtr<IMFMediaType> outputMediaType = NULL;

    Helper::CheckHR(MFCreateMediaType(&inputMediaType), "Create Media Type");
    Helper::CheckHR(MFCreateMediaType(&outputMediaType), "Create Media Type");

    Helper::CheckHR(inputMediaType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video), "Set major type");
    Helper::CheckHR(inputMediaType->SetGUID(MF_MT_SUBTYPE, inputFormat), "Set Sub type");
    Helper::CheckHR(MFSetAttributeSize(inputMediaType, MF_MT_FRAME_SIZE, _inputInfo.FrameWidth, _inputInfo.FrameHeight), "Set Frame Size");
    Helper::CheckHR(inputMediaType->SetUINT64(MF_MT_FRAME_RATE, _inputInfo.FrameRate), "Set Frame rate");
    Helper::CheckHR(inputMediaType->SetUINT32(MF_MT_AVG_BITRATE, _inputInfo.BitRate), "Set Bit rate");
    Helper::CheckHR(inputMediaType->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlaceMode::MFVideoInterlace_Progressive), "Set Interlace Mode");
    Helper::CheckHR(MFSetAttributeRatio(inputMediaType, MF_MT_PIXEL_ASPECT_RATIO, 1, 1), "Set Aspect Ratio");

    Helper::CheckHR(outputMediaType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video), "Set major type");
    Helper::CheckHR(outputMediaType->SetGUID(MF_MT_SUBTYPE, outputFormat), "Set Sub type");
    Helper::CheckHR(MFSetAttributeSize(outputMediaType, MF_MT_FRAME_SIZE, _inputInfo.FrameWidth, _inputInfo.FrameHeight), "Set Frame Size");
    Helper::CheckHR(outputMediaType->SetUINT64(MF_MT_FRAME_RATE, _inputInfo.FrameRate), "Set Frame rate");
    Helper::CheckHR(outputMediaType->SetUINT32(MF_MT_AVG_BITRATE, _inputInfo.BitRate), "Set Bit rate");
    Helper::CheckHR(outputMediaType->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlaceMode::MFVideoInterlace_Progressive), "Set Interlace Mode");
    Helper::CheckHR(MFSetAttributeRatio(outputMediaType, MF_MT_PIXEL_ASPECT_RATIO, 1, 1), "Set Aspect Ratio");

    if (outputFirst)
    {
        UINT32 level = -1;
        outputMediaType->SetUINT32(MF_MT_MPEG2_LEVEL, level);
        Helper::CheckHR(outputMediaType->SetUINT32(MF_MT_MPEG2_PROFILE, eAVEncH264VProfile_Main), "Set Profile Mode");
        Helper::CheckHR(transform->SetOutputType(0, outputMediaType, 0), "Set output type");
        Helper::CheckHR(transform->SetInputType(0, inputMediaType, 0), "Set input type");
    }
    else
    {
        Helper::CheckHR(transform->SetInputType(0, inputMediaType, 0), "Set input type");
        Helper::CheckHR(transform->SetOutputType(0, outputMediaType, 0), "Set output type");
    }
}

HRESULT 0x80004005 (E_FAIL) is returned at Helper::CheckHR(transform->SetOutputType(0, outputMediaType, 0), "Set output type");

Any ideas?
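One thing worth double-checking (an editorial aside, not from the thread): MF_MT_FRAME_RATE is a packed 64-bit numerator/denominator ratio, so if _inputInfo.FrameRate holds a plain frame count rather than a pre-packed value, SetUINT64 would produce an invalid media type. Likewise, MF_MT_MPEG2_LEVEL is a UINT32 enumeration, so -1 is not a defined level. A minimal sketch of both fixes, with 30/1 and Level 3.1 as placeholder values:

    // Pack the frame rate as a numerator/denominator pair.
    Helper::CheckHR(MFSetAttributeRatio(outputMediaType, MF_MT_FRAME_RATE, 30, 1),
                    "Set Frame rate");

    // Use a defined H.264 level (or omit the attribute and let the encoder pick).
    Helper::CheckHR(outputMediaType->SetUINT32(MF_MT_MPEG2_LEVEL, eAVEncH264VLevel3_1),
                    "Set Level");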





H264 Encoder setup issue


Hi guys,

I posted a question on another forum before finding this place. Here is the link.

http://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/5951c5dc-a7e4-44f3-a6d8-862e0826f0e5/h264-setoutputtype-returns-efail-error?forum=windowsgeneraldevelopmentissues

The text and sample code are the same as in the previous post, "H.264 SetOutputType returns E_FAIL error," above.

FYI: _inputInfo is a bit bucket that holds the information retrieved from the source reader (frame size, frame rate, and bit rate).

Any help is much appreciated.

Any ideas?



IMFSinkWriter: WriteFrame returns E_CHANGED_STATE when encoding H.264


(Note: I originally posted this question in the Windows Apps with C++ forum about 3 weeks ago, but received no replies. I realized quickly that this might be a better forum for this, but wanted to let that question run its course before posting again here. So any help you all could provide would be VERY much appreciated.)

I am using a C++/CX component to encode video using IMFSinkWriter, and everything works fine when I encode using the WMV3 codec, but when I try to use the H.264 codec I get E_CHANGED_STATE (0x8000000c: "A concurrent or interleaved operation changed the state of the object, invalidating this operation.") errors. I can't find any reason to believe this is a threading issue, which was my first thought: the component is being called from an async/await C# method, but everything awaitable is being awaited, as far as I can see, and there's no other threading-like behavior going on in the app.

My second thought was that somehow the sink writer's throttling (on by default) was being turned off. But this doesn't seem to be the case. In fact, I tried to explicitly enable throttling, but this didn't have any effect:

spAttr->SetUINT32(MF_SINK_WRITER_DISABLE_THROTTLING, false);
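(Editorial aside, not from the original post: the sink writer reads its configuration attributes when it is created, so this flag only has an effect if spAttr is the attribute store passed to the creation call. A minimal sketch, with the output file name as a placeholder:)

    CComPtr<IMFAttributes> spAttr;
    MFCreateAttributes(&spAttr, 1);
    spAttr->SetUINT32(MF_SINK_WRITER_DISABLE_THROTTLING, FALSE); // keep throttling on

    CComPtr<IMFSinkWriter> spWriter;
    HRESULT hr = MFCreateSinkWriterFromURL(L"output.mp4", NULL, spAttr, &spWriter);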

The encoding source is an RGB32 stream, and I am encoding at a 1.5Mbps bitrate, 25 FPS.

The error happens every time I try to encode a video of any real-world length: it'll usually pop up before it gets about 10% of the way through a 5 minute output file. However, it isn't entirely deterministic: sometimes it will get partway through the third (of 58) segments, while other times it will happen on the first.

None of these problems affect WMV encoding. Can anyone offer any suggestions for things to try?

Additional note: I have seen some references suggesting I might need to supply some metadata (a sample description box) with the video, but I can't find any documentation on this that I can understand. As indicated, I can encode WMV without getting this error (usually; I think I've seen it once or twice in the long history of testing this component), and all I would like to do is provide an alternative format for my users. Could the lack of this metadata be causing this, and if so, are there any accessible tutorials on constructing it?

damaged PSD photoshop CS3 file?

Hey!

While working on a tileset for 2KH, the Vista display drivers started behaving strangely (black screen for a second, then the screen comes back with a recovery message).

I successfully saved my work right before a complete system crash, but once I restarted Vista and tried opening my work to continue, Photoshop gave me a warning about some data being corrupted and opened my file as a black, one-layer document. I tried several times with no success so far.

Usually I'm quite good at versioning and archiving tilesets, but ironically I've been working close to 2 months on that single file without archiving it (it's 100+ tiles on a single PSD), so I'm really depressed about losing all that work.

I tried several PSD repair programs available on the web with no success. I was wondering if any of you guys knew a way to save a PSD with corrupted data?

MF_MT_AVG_BITRATE can't change a bitrate with IMFSinkWriter + H264 on Windows7


I'm investigating the 'MFCaptureToFile' sample on Windows 7 with Visual Studio 2008, to understand how to use IMFSinkWriter. My goal is to encode video to H.264 with IMFSinkWriter.

I changed TARGET_BIT_RATE from (240*1000) to (10*1024*1024). It is passed to the IMFMediaType through MF_MT_AVG_BITRATE.

As a result, I can change the bitrate of the WMV output this way, but H.264 keeps the same bitrate (around 210 kbps).


I ran the same program on Windows 8 and it works well. I'm not sure why it isn't working on Windows 7. Is there a way to change the H.264 bitrate on Windows 7?
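One avenue worth trying (an editorial sketch, unverified on Windows 7): reach the encoder behind the sink writer through GetServiceForStream and set the bitrate with ICodecAPI instead of the media type. pWriter and the stream index 0 are assumed names/values, not taken from the sample:

    #include <codecapi.h>

    // Set the mean bitrate directly on the H.264 encoder MFT.
    ICodecAPI *pCodecApi = NULL;
    HRESULT hr = pWriter->GetServiceForStream(0, GUID_NULL,
                                              IID_PPV_ARGS(&pCodecApi));
    if (SUCCEEDED(hr))
    {
        VARIANT var;
        VariantInit(&var);
        var.vt = VT_UI4;
        var.ulVal = 10 * 1024 * 1024; // 10 Mbps, matching TARGET_BIT_RATE
        hr = pCodecApi->SetValue(&CODECAPI_AVEncCommonMeanBitRate, &var);
        pCodecApi->Release();
    }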

Also, I submitted this question on the wrong page, the one for MF_MT_AVG_BITRATE. How can I delete it? I can't delete it from "Edit Post"...


Error restarting LifeCam VX-3000


Hi,

I have written an application to capture from UVC compliant cameras using Media foundation's SourceReader.

A part of the application closes down and releases the camera and then restarts it. Below are the code snippets describing it:

Initialize Source reader:

hr = MFCreateSourceReaderFromMediaSource(_mSource,
					 _mAttributes,&_sourceReader);


Get a frame synchronously:

    hr = _sourceReader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM,
        0, // Not using the MF_SOURCE_READER_CONTROLF_DRAIN flag because we are not going to buffer samples
        &actStreamIndex, &actStreamFlags, &timeStamp, &mediaSample);

We now clear & release the camera:

    SafeRelease(&_sourceReader);
    SafeRelease(&_mSource);
    SafeRelease(&_mAttributes);
    SafeRelease(&_nativeMediaType);

    for (DWORD i = 0; i < _count; i++)
    {
        SafeRelease(&_mmDevices[i]);
    }

    CoTaskMemFree(_mmDevices);
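One guess worth testing (editorial, not from the thread): a media source keeps the device claimed until IMFMediaSource::Shutdown is called, so releasing the COM pointers alone may leave the old instance holding the camera. A minimal change to the teardown above:

    SafeRelease(&_sourceReader);
    if (_mSource)
    {
        _mSource->Shutdown(); // actually closes the device handle
    }
    SafeRelease(&_mSource);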

I then re-initialize the camera with a different media type and call ReadSample again.

This workflow works with most cameras, but with the LifeCam VX-3000 the ReadSample call after re-initializing returns the following failed HRESULT: 0x80070004, i.e. HRESULT_FROM_WIN32(ERROR_TOO_MANY_OPEN_FILES): "The system cannot open the file."

Am I missing anything while releasing the device? Thanks in advance for any ideas/suggestions on this.

Anchit


Solve issue - unresolved external symbol MFCreateDXGIDeviceManager


I want to capture video using Media Foundation transform library.

I have used

HRESULT hr = S_OK;
D3D_FEATURE_LEVEL FeatureLevel;
ID3D11DeviceContext* pDX11DeviceContext;

hr = CreateDX11Device(&g_pDX11Device, &pDX11DeviceContext, &FeatureLevel);


if (SUCCEEDED(hr))
{
    hr = MFCreateDXGIDeviceManager(&g_ResetToken, &g_pDXGIMan);
}

On building the VC++ application I received the error:

unresolved external symbol MFCreateDXGIDeviceManager

For this, I used

#pragma comment(lib, "mf") // For MFEnumDevices
#pragma comment(lib, "mfplat")
#pragma comment(lib, "mfreadwrite")
#pragma comment(lib, "dxva2")
#pragma comment(lib, "d3d11")
#pragma comment(lib, "mfuuid")

to include the libraries related to MFCreateDXGIDeviceManager. Still I am getting the same error. Please suggest a solution.
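An editorial aside that may help: MFCreateDXGIDeviceManager first shipped with Windows 8, so it is only exported by the Windows 8 (and later) SDK's mfplat.lib, and its declaration is guarded for Windows 8 targets. If the project compiles but fails to link, check that both the include and library paths point at the Windows 8 SDK. A minimal consistency check, assuming that SDK is installed:

    // Target Windows 8 before including the Media Foundation headers, and
    // link the Windows 8 SDK's mfplat.lib, which contains the export.
    #define _WIN32_WINNT 0x0602 // Windows 8
    #include <mfapi.h>
    #pragma comment(lib, "mfplat")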



Spec for AsfLeakyBucketPairs


I am trying to determine how the AsfLeakyBucketPairs data in an ASF file is generated. We have a legacy device that will not play an ASF stream unless this field is present. However, we can populate the field with garbage and the device will still play the stream properly. We would like to generate the correct leaky bucket data. I looked at the leaky bucket field in the VC-1 sequence header, but it only has one pair and the value does not correlate to any of the pairs in the data. Is there a spec that describes how to generate the AsfLeakyBucketPairs data for an ASF file?

"ASFLeakyBucketPairs"0Binary0

0000: 00 00 C0 5D 00 00 00 93-C0 00 30 75 00 00 BC 03      ]      0u    
0010: 98 00 C8 AF 00 00 63 EF-61 00 90 E2 00 00 4C B1         c a     L
0020: 49 00 00 C2 01 00 B9 06-20 00 80 A9 03 00 24 E0   I             $
0030: 0A 00 30 57 05 00 6B F6-04 00 20 A1 07 00 F4 94     0W  k        
0040: 01 00 90 23 0B 00 C0 04-00 00 40 42 0F 00 9A 02      #      @B    
0050: 00 00 C0 5C 15 00 AD 01-00 00 20 0B 20 00 E7 00      \            
0060: 00 00 40 4B 4C 00 2A 00-00 00 80 96 98 00 12 00     @KL *        
0070: 00 00                                               

Trouble setting the H.264 Encoder's Max Key Frame Spacing.


Hi

I am currently trying to change the max key frame interval of the Media Foundation H.264 encoder by setting MF_MT_MAX_KEYFRAME_SPACING on the output media type of the encoder, but every video that I create seems to default to a key frame interval of 2 seconds.

My current scenario entails setting up a SinkWriter with an uncompressed YUY2 input media type and an H.264 output media type.

The video source is 25 fps, so for a 1 second key frame interval I set the MF_MT_MAX_KEYFRAME_SPACING attribute on the H.264 output media type to 25 frames. But the H.264 encoder still outputs key frames every 2 seconds (50 frames).

I also tried setting it to a longer 250-frame (10 second) interval, with the same result.

Am I missing a setting somewhere, or is the max key frame interval not configurable on the Media Foundation H.264 encoder?

I have included two trace statements from my tests with the SinkWriter below:

1. The Media Foundation H.264 encoder being created.
7784,1CD4 12:44:33.39787 COle32ExportDetours::CoCreateInstance @ Created {6CA50344-051A-4DED-9779-A43305165E35}  (C:\Windows\SysWOW64\mfh264enc.dll) @00A5B51C - traced interfaces: IMFTransform @00A5B51C,

2. The output media format set on the encoder with the MF_MT_MAX_KEYFRAME_SPACING=25 attribute.
7784,1CD4 12:44:33.41075 CMFTransformDetours::SetOutputType @00A5B51C Succeeded MT: MF_MT_FRAME_SIZE=3092376453696 (720,576);MF_MT_AVG_BITRATE=3500000;MF_MT_MPEG_SEQUENCE_HEADER=00 00 00 01 67 42 c0 1e 95 b0 2d 04 9b 01 10 00 00 03 00 10 00 00 03 03 21 da 08 84 6e 00 00 00 01 68 ca 8f 20 ;MF_MT_MAJOR_TYPE=MEDIATYPE_Video;MF_MT_MPEG2_PROFILE=66;MF_MT_MAX_KEYFRAME_SPACING=25;MF_MT_FRAME_RATE=107374182401 (25,1);MF_MT_PIXEL_ASPECT_RATIO=4294967297 (1,1);MF_MT_INTERLACE_MODE=7;MF_MT_SUBTYPE=MEDIASUBTYPE_H264
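An editorial workaround sketch, not from the thread: the GOP length can also be requested through ICodecAPI, reachable from the sink writer via GetServiceForStream. Whether mfh264enc.dll honors CODECAPI_AVEncMPVGOPSize is an assumption to verify, and pSinkWriter and the stream index 0 are placeholder names:

    #include <codecapi.h>

    // Ask the H.264 encoder for a 25-frame GOP via ICodecAPI.
    ICodecAPI *pCodecApi = NULL;
    HRESULT hr = pSinkWriter->GetServiceForStream(0, GUID_NULL,
                                                  IID_PPV_ARGS(&pCodecApi));
    if (SUCCEEDED(hr))
    {
        VARIANT var;
        VariantInit(&var);
        var.vt = VT_UI4;
        var.ulVal = 25; // key frame every 25 frames (1 s at 25 fps)
        hr = pCodecApi->SetValue(&CODECAPI_AVEncMPVGOPSize, &var);
        pCodecApi->Release();
    }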

MFCopy: One more alarming bug in Windows 8/8.1


One more Windows 8/8.1 bug. This is alarming: basic functionality is broken.

Basically, MFCopy "trim" does not work in Windows 8/8.1. It works fine on Windows 7.

Steps to reproduce:

MFCopy  -s 20000 -d 60000 input.mp4 out.mp4

Please see the output files (out_win8.mp4, out_win7.mp4) here:

https://drive.google.com/file/d/0Bxyb9Iftjh4DX1FsVENJWWt4Q3M/view?usp=sharing

The duration of out_win8.mp4 is 1:19 (incorrect), whereas the duration of out_win7.mp4 is 1:00 (correct).

MS developers, Please check this.

How to write chunks from the recorded video to publishpoint using ssfsdk api?


Hello,

I am using the SSF SDK API for creating an ismv from a .wmv file. I want to write chunks to the publishing point of a remote server. I am using WinHTTP:

DWORD dwBytesWritten = 0;
	BOOL  bResults = FALSE;
	HINTERNET hSession = NULL,
		hConnect = NULL,
		hRequest = NULL;

	// Use WinHttpOpen to obtain a session handle.
	hSession = WinHttpOpen(L"A WinHTTP Example Program/1.0",
		WINHTTP_ACCESS_TYPE_DEFAULT_PROXY,
		WINHTTP_NO_PROXY_NAME,
		WINHTTP_NO_PROXY_BYPASS, 0);

	// Specify an HTTP server. WinHttpConnect takes only the host name;
	// the object path goes to WinHttpOpenRequest below.
	if (hSession)
		hConnect = WinHttpConnect(hSession, L"10.10.10.29",
		INTERNET_DEFAULT_HTTP_PORT, 0);

	if (hConnect) {
		// The accept-types array must be null-terminated; the multipart
		// content type belongs in a Content-Type request header instead.
		LPCWSTR types[] = { L"*/*", NULL };
		hRequest = WinHttpOpenRequest(hConnect,
			L"POST", L"/Test/Test.isml/Stream(video)",
			NULL, WINHTTP_NO_REFERER,
			types, 0);
	}

	if (hRequest) {

		bResults = WinHttpAddRequestHeaders(hRequest,
			L"Host: 10.10.10.29",
			(ULONG)-1L,
			WINHTTP_ADDREQ_FLAG_ADD);

		bResults = WinHttpAddRequestHeaders(hRequest,
			L"Connection: keep-alive",
			(ULONG)-1L,
			WINHTTP_ADDREQ_FLAG_ADD);

	}

	if (bResults) {
		bResults = WinHttpSendRequest(hRequest,
			WINHTTP_NO_ADDITIONAL_HEADERS,
			0, WINHTTP_NO_REQUEST_DATA,
			0, (DWORD)strlen(pszData), 0);
	}
	else
	{
		printf("Error %d open.\n", GetLastError());
	}

	if (bResults) {
		bResults = WinHttpWriteData(hRequest, pszData,
			(DWORD)strlen(pszData),
			&dwBytesWritten);
	}
	else
	{
		printf("Error %d send.\n", GetLastError());
	}

	if (bResults) {
		bResults = WinHttpReceiveResponse(hRequest, NULL);
	}
	else
	{
		printf("Error %d write.\n", GetLastError());
	}

	if (bResults)
	{
		do
		{
			dwSize = 0;
			if (!WinHttpQueryDataAvailable(hRequest, &dwSize))
			{
				printf("Error %u in WinHttpQueryDataAvailable.\n",
					GetLastError());
				break;
			}

			// No more available data.
			if (!dwSize)
				break;

			// Allocate space for the buffer.
			pszOutBuffer = new char[dwSize + 1];
			if (!pszOutBuffer)
			{
				printf("Out of memory\n");
				break;
			}

			// Read the Data.
			ZeroMemory(pszOutBuffer, dwSize + 1);

			if (!WinHttpReadData(hRequest, (LPVOID)pszOutBuffer,
				dwSize, &dwDownloaded))
			{
				printf("Error %u in WinHttpReadData.\n", GetLastError());
			}
			else
			{
				printf("%s", pszOutBuffer);
			}

			delete[] pszOutBuffer;

			if (!dwDownloaded)
				break;

		} while (dwSize > 0);
	}
	else
	{
		printf("Error %d resp.\n", GetLastError());
	}


	if (hRequest) WinHttpCloseHandle(hRequest);
	if (hConnect) WinHttpCloseHandle(hConnect);
	if (hSession) WinHttpCloseHandle(hSession);

But WinHTTP is unable to connect to the server, so the chunks cannot be written as an ismv file via the publishing point.

Any suggestions regarding this would be appreciated.

Obtaining display aspect ratio from AVI files

$
0
0

Hi, I am reading SD PAL-sized AVI files that have 4:3 and 16:9 display aspect ratios, but I am not able to read this from the file.

When I call MFGetAttributeRatio() with the GUID MF_MT_PIXEL_ASPECT_RATIO, it returns 1:1. The MFVideoFormat also contains 1:1, and the aspect ratio in VideoInfo2 is 5:4 for both 4:3 and 16:9 files. I understand that the aspect ratio in VideoInfo2 is the storage aspect ratio, that I need the pixel aspect ratio, and that multiplying the two gives the display aspect ratio, but I have not been successful in reading the pixel aspect ratio. Also, the interlace field is always progressive, which is incorrect; it should be interlaced.

Analysing the output from MFTrace when attached to Windows Media Player on Windows 7, I can see that the MF_MT_PIXEL_ASPECT_RATIO value on calls to SetInputType and SetOutputType changes from 1,1 to 16,15 and then 64,45; the latter, multiplied by 5:4, gives 16:9.

I have started using Media Foundation and have read the article http://msdn.microsoft.com/en-us/library/bb530115(v=VS.85).aspx, but it does not explain how to obtain the source display aspect ratio I am after.

Any suggestions and help are much appreciated, as I think I am not the only person struggling with this. To give a little more information: I am using Media Foundation to import AVI files into our editor software, and I can read all other video information such as size and frame rate, but I need to know the interlace mode and display aspect ratio before reading the samples into our application.
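For what it's worth, here is a minimal editorial sketch of the arithmetic once a correct pixel aspect ratio is available (pType is an assumed IMFMediaType; per the post, the AVI source may not report a correct PAR in the first place):

    UINT32 width = 0, height = 0, parN = 1, parD = 1;
    MFGetAttributeSize(pType, MF_MT_FRAME_SIZE, &width, &height);
    MFGetAttributeRatio(pType, MF_MT_PIXEL_ASPECT_RATIO, &parN, &parD);

    // DAR = (width / height) * (parN / parD), kept as an integer ratio.
    // e.g. 720x576 with PAR 16:15 gives 4:3; with PAR 64:45 it gives 16:9.
    UINT64 darN = (UINT64)width * parN;
    UINT64 darD = (UINT64)height * parD;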

How to encode captured video to ismv with a publishing point using the SSF SDK API and the Intel SDK?


Hello,

I am using Media Foundation for capturing video from a camera.

HRESULT ConfigureVideoEncoding(IMFCaptureSource *pSource, IMFCaptureRecordSink *pRecord, REFGUID guidEncodingType)
{
    IMFMediaType *pMediaType = NULL;
    IMFMediaType *pMediaType2 = NULL;
    GUID guidSubType = GUID_NULL;

    // Configure the video format for the recording sink.
    HRESULT hr = pSource->GetCurrentDeviceMediaType((DWORD)MF_CAPTURE_ENGINE_PREFERRED_SOURCE_STREAM_FOR_VIDEO_RECORD , &pMediaType);
    if (FAILED(hr))
    {
        goto done;
    }

    hr = CloneVideoMediaType(pMediaType, guidEncodingType, &pMediaType2);
    if (FAILED(hr))
    {
        goto done;
    }


    hr = pMediaType->GetGUID(MF_MT_SUBTYPE, &guidSubType);
    if(FAILED(hr))
    {
        goto done;
    }

    if(guidSubType == MFVideoFormat_H264_ES || guidSubType == MFVideoFormat_H264 || guidSubType == MFVideoFormat_YUY2)
    {
        //When the webcam supports H264_ES or H264, we just bypass the stream. The output from Capture engine shall be the same as
		//the native type supported by the webcam
		hr = pMediaType2->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_YUY2);
    }
    else
    {
        UINT32 uiEncodingBitrate;
        hr = GetEncodingBitrate(pMediaType2, &uiEncodingBitrate);
        if (FAILED(hr))
        {
            goto done;
        }

        hr = pMediaType2->SetUINT32(MF_MT_AVG_BITRATE, uiEncodingBitrate);
    }

    if (FAILED(hr))
    {
        goto done;
    }

    // Connect the video stream to the recording sink.
    DWORD dwSinkStreamIndex;
    hr = pRecord->AddStream((DWORD)MF_CAPTURE_ENGINE_PREFERRED_SOURCE_STREAM_FOR_VIDEO_RECORD, pMediaType2, NULL, &dwSinkStreamIndex);

done:
    SafeRelease(&pMediaType);
    SafeRelease(&pMediaType2);
    return hr;
}

HRESULT ConfigureAudioEncoding(IMFCaptureSource *pSource, IMFCaptureRecordSink *pRecord, REFGUID guidEncodingType)
{
    IMFCollection *pAvailableTypes = NULL;
    IMFMediaType *pMediaType = NULL;
    IMFAttributes *pAttributes = NULL;

    // Configure the audio format for the recording sink.

    HRESULT hr = MFCreateAttributes(&pAttributes, 1);
    if(FAILED(hr))
    {
        goto done;
    }

    // Enumerate low latency media types
    hr = pAttributes->SetUINT32(MF_LOW_LATENCY, TRUE);
    if(FAILED(hr))
    {
        goto done;
    }


    // Get a list of encoded output formats that are supported by the encoder.
    hr = MFTranscodeGetAudioOutputAvailableTypes(guidEncodingType, MFT_ENUM_FLAG_ALL | MFT_ENUM_FLAG_SORTANDFILTER,
        pAttributes, &pAvailableTypes);
    if (FAILED(hr))
    {
        goto done;
    }

    // Pick the first format from the list.
    hr = GetCollectionObject(pAvailableTypes, 0, &pMediaType);
    if (FAILED(hr))
    {
        goto done;
    }

    // Connect the audio stream to the recording sink.
    DWORD dwSinkStreamIndex;
    hr = pRecord->AddStream((DWORD)MF_CAPTURE_ENGINE_PREFERRED_SOURCE_STREAM_FOR_AUDIO, pMediaType, NULL, &dwSinkStreamIndex);
    if(hr == MF_E_INVALIDSTREAMNUMBER)
    {
        //If an audio device is not present, allow video only recording
        hr = S_OK;
    }
done:
    SafeRelease(&pAvailableTypes);
    SafeRelease(&pMediaType);
    SafeRelease(&pAttributes);
    return hr;
}
 

I want to create a D3D11 pipeline for getting streams from the recorded video and sending them to the SSF SDK API (Smooth Streaming Format SDK):

	hr = SSFMuxAddStream(hSSFMux, &streamInfo, &pCtx->dwStreamIndex);

And later, using

	hr = SSFMuxProcessOutput( pCtx->hSSFMux, pCtx->dwStreamIndex, &outputBuffer );

we will get chunks (streams) in the output buffer.

I want to write these streams to the publishing point ( http://<server>/<pubpoint>/Streams(<identifier>) ) for live streaming.

I have found the code below for this.

  fResult = WinHttpSendRequest(
                    hHttpRequest,
                    szAdditionalHeaders,
                    ARRAYSIZE(szAdditionalHeaders)-1,
                    WINHTTP_NO_REQUEST_DATA,
                    0,
                    0,
                    NULL
                    );
    if( !fResult )
    {
        hr = HRESULT_FROM_WIN32( GetLastError() );
        goto done;
    }
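For context, here is an editorial sketch of the session setup the call above assumes; the host name goes to WinHttpConnect and only the path to WinHttpOpenRequest, and "server" and the publishing-point path are placeholders, not values from the post:

    HINTERNET hSession = WinHttpOpen(L"SSF Push/1.0",
        WINHTTP_ACCESS_TYPE_DEFAULT_PROXY,
        WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, 0);

    HINTERNET hConnect = hSession ? WinHttpConnect(hSession,
        L"server", INTERNET_DEFAULT_HTTP_PORT, 0) : NULL;

    HINTERNET hHttpRequest = hConnect ? WinHttpOpenRequest(hConnect,
        L"POST", L"/pubpoint/Streams(video)",
        NULL, WINHTTP_NO_REFERER, WINHTTP_DEFAULT_ACCEPT_TYPES, 0) : NULL;

    // The chunked upload has no Content-Length, so declare chunked transfer
    // before streaming fragments with WinHttpWriteData.
    WCHAR szAdditionalHeaders[] = L"Transfer-Encoding: chunked\r\n";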


And here’s an example of a function for posting HTTP chunks (assuming WinHTTP is operating in “synchronous” mode):


     HRESULT WriteToSmoothStreamOutput(
        __in HINTERNET hHttpRequest,
        __in_ecount(cbData) LPCVOID pbData,
        __in ULONG cbData )
    {
        HRESULT hr = S_OK;
        BOOL fResult = FALSE;
        DWORD cbWritten;

        char szHttpChunkHeaderA[32];
        char szHttpChunkFooterA[] = "\r\n";

        //
        // Send the HTTP Chunk Transfer Encoding chunk-start mark
        // Observe the use of UTF-8 strings.
        //

        hr = StringCchPrintfA(
                szHttpChunkHeaderA,
                ARRAYSIZE(szHttpChunkHeaderA),
                "%X\r\n",
                cbData );
        if( FAILED(hr) )
        {
            goto done;
        }

        fResult = WinHttpWriteData(
                        hHttpRequest,
                        szHttpChunkHeaderA,
                        (DWORD)( strlen(szHttpChunkHeaderA) * sizeof(char) ),
                        &cbWritten
                        );
        if( !fResult )
        {
            hr = HRESULT_FROM_WIN32( GetLastError() );
            goto done;
        }

        //
        // Send the actual chunk data
        //

        if( cbData > 0 )
        {
            fResult = WinHttpWriteData(
                            hHttpRequest,
                            pbData,
                            cbData, &cbWritten
                            );
            if( !fResult )
            {
                hr = HRESULT_FROM_WIN32( GetLastError() );
                goto done;
            }

        //
        // Send the HTTP Chunk Transfer Encoding chunk-end mark
        //

        fResult = WinHttpWriteData(
                        hHttpRequest,
                        szHttpChunkFooterA,
                        (DWORD)( strlen(szHttpChunkFooterA) * sizeof(char) ),
                        &cbWritten
                        );
        if( !fResult )
        {
            hr = HRESULT_FROM_WIN32( GetLastError() );
            goto done;
        }

    done:
         return( hr );
    }  

But WinHTTP is unable to connect and write the streams to the publishing point for live streaming.

Any suggestions for the above requirement would be appreciated.



Problem with HTTPSchemePlugin

I'm developing a WPF application with video playback functionality. Video is transmitted over the network through HTTP with custom encryption. To handle that, I've created a custom scheme handler that instantiates the standard HttpSchemePlugin, to avoid writing my own HTTP client. Everything works fine on Windows 7, 8, and 8.1, but I have a notebook with Windows 7 where HttpSchemePlugin creation takes a long time (around 5-10 minutes). Could you help me?

H264 encoded video can not be played by Windows Media Player 12


I have an H.264 encoded video (self-generated) which cannot be played by Windows Media Player 12, but plays fine in other video players such as VLC.

Unfortunately, Media Player (version 12.0.7601.18526) does not provide any error information about why the video isn't played: the player is in play mode and the progress bar flickers, but there is nothing to see.

The non-playing video of interest can be found here: http://www.mediafire.com/download/x12gn3u5bmn6c1u/CapturedVideo-00037.ts

The H.264 analysis tools I used indicate no errors, so it seems that the file content is not completely wrong; maybe only a flag needed by Windows Media Player is not set correctly.

Thanks in advance for any kind of helpful response.

waveOutOpen crashes on 64-bit system


Hello There,

I am using the winmm library for audio output, but on a Windows 7 64-bit system my application crashes in the waveOutOpen API. How do I manage this API on both 32-bit and 64-bit?
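For reference, a minimal editorial sketch of a correct call (the format values are placeholders): a frequent 64-bit-only crash is passing the callback or instance argument through a 32-bit DWORD; both parameters are DWORD_PTR, so a truncated function pointer crashes on x64 while appearing to work on x86.

    #include <windows.h>
    #include <mmsystem.h>
    #pragma comment(lib, "winmm")

    void OpenWaveOut()
    {
        WAVEFORMATEX wfx = { 0 };
        wfx.wFormatTag      = WAVE_FORMAT_PCM;
        wfx.nChannels       = 2;
        wfx.nSamplesPerSec  = 44100;
        wfx.wBitsPerSample  = 16;
        wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
        wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;
        wfx.cbSize          = 0; // must be 0 for plain PCM

        HWAVEOUT hWaveOut = NULL;
        DWORD_PTR dwCallback = 0; // pointer-sized on both 32- and 64-bit
        MMRESULT mmr = waveOutOpen(&hWaveOut, WAVE_MAPPER, &wfx,
                                   dwCallback, 0, CALLBACK_NULL);
        if (mmr == MMSYSERR_NOERROR)
            waveOutClose(hWaveOut);
    }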

Thanks and Regards.

Windows 8.1 Email program default text (size, color, etc)


In the Windows 8 and 8.1 email program, it would be good if I could choose a default text size without having to choose the text size each time I write to a friend who is visually impaired.

Thanks!
